Data Science Professional/6 - Machine Learning With Python/2.4 Non-linear Regression.ipynb
###Markdown
Non Linear Regression Analysis RIHAD VARIAWA, Data Scientist - Who has fun LEARNING, EXPLORING & GROWING If the data shows a curvy trend, then linear regression will not produce very accurate results compared to non-linear regression because, as the name implies, linear regression presumes that the data is linear. Let's learn about non-linear regressions and work through an example in Python. In this notebook, we fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. Importing required libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Though linear regression is very good for solving many problems, it cannot be used for all datasets. First, recall how linear regression models a dataset: it models a linear relation between a dependent variable $y$ and an independent variable $x$, using a simple equation of degree 1, for example $y = 2x + 3$.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 2*(x) + 3
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
#plt.figure(figsize=(8,6))
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-linear regression models the relationship between independent variables $x$ and a dependent variable $y$ through a non-linear function. Essentially, any relationship that is not linear can be termed non-linear, and it is usually represented by a polynomial of degree $k$ (the maximum power of $x$): $$ y = a x^3 + b x^2 + c x + d $$ Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \log(x)$$ Or even more complicated, such as: $$ y = \log(a x^3 + b x^2 + c x + d)$$ Let's take a look at a cubic function's graph.
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = 1*(x**3) + 1*(x**2) + 1*x + 3
y_noise = 20 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, this function contains $x^3$ and $x^2$ terms. Also, the graph of this function is not a straight line over the 2D plane, so it is a non-linear function. Some other types of non-linear functions are: Quadratic $$ Y = X^2 $$
###Code
x = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
y = np.power(x,2)
y_noise = 2 * np.random.normal(size=x.size)
ydata = y + y_noise
plt.plot(x, ydata, 'bo')
plt.plot(x,y, 'r')
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Exponential An exponential function with base $c$ is defined by $$ Y = a + b c^X$$ where $b \neq 0$, $c > 0$, $c \neq 1$, and $X$ is any real number. The base, $c$, is constant and the exponent, $X$, is the variable.
###Code
X = np.arange(-5.0, 5.0, 0.1)
##You can adjust the slope and intercept to verify the changes in the graph
Y= np.exp(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Logarithmic The response $y$ results from applying a logarithmic map from the input $x$'s to the output variable $y$. It is one of the simplest forms of __log()__, i.e. $$ y = \log(x)$$ Please consider that instead of $x$, we can use $X$, which can be a polynomial representation of the $x$'s. In general form it would be written as \begin{equation}y = \log(X)\end{equation}
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
/home/jupyterlab/conda/lib/python3.6/site-packages/ipykernel_launcher.py:3: RuntimeWarning: invalid value encountered in log
This is separate from the ipykernel package so we can avoid doing imports until
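###Markdown
Note that the RuntimeWarning above appears because $\log(x)$ is undefined for $x \le 0$: np.log returns NaN over the negative half of the range, and those points are simply skipped in the plot. A minimal sketch (an addition, not part of the original lab) that restricts the domain to positive values and avoids the warning:
###Code
X = np.arange(0.1, 5.0, 0.1)  # positive values only, since log(x) is undefined for x <= 0
Y = np.log(X)
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____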
###Markdown
Sigmoidal/Logistic $$ Y = a + \frac{b}{1+ c^{(X-d)}}$$
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1-4/(1+np.power(3, X-2))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
Non-Linear Regression example As an example, we're going to try to fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns: the first, a year between 1960 and 2014; the second, China's corresponding annual gross domestic income in US dollars for that year.
###Code
import numpy as np
import pandas as pd
#downloading dataset
!wget -nv -O _datasets/china_gdp.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv
df = pd.read_csv("_datasets/china_gdp.csv")
df.head(10)
###Output
2019-01-22 08:45:15 URL:https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv [1218/1218] -> "_datasets/china_gdp.csv" [1]
###Markdown
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Plotting the Dataset This is what the datapoints look like. It kind of looks like either a logistic or an exponential function. The growth starts off slow, then from 2005 onward it is very significant, and finally it decelerates slightly in the 2010s.
###Code
plt.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plt.plot(x_data, y_data, 'ro')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Choosing a model From an initial look at the plot, we determine that the logistic function could be a good approximation, since it has the property of starting with slow growth, increasing in the middle, and then decreasing again at the end, as illustrated below:
###Code
X = np.arange(-5.0, 5.0, 0.1)
Y = 1.0 / (1.0 + np.exp(-X))
plt.plot(X,Y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
###Output
_____no_output_____
###Markdown
The formula for the logistic function is the following: $$ \hat{Y} = \frac{1}{1+e^{-\beta_1(X-\beta_2)}}$$ $\beta_1$: Controls the curve's steepness. $\beta_2$: Slides the curve on the x-axis. Building The Model Now, let's build our regression model and initialize its parameters.
###Code
def sigmoid(x, Beta_1, Beta_2):
    y = 1 / (1 + np.exp(-Beta_1 * (x - Beta_2)))
    return y
###Output
_____no_output_____
###Markdown
Let's look at a sample sigmoid curve that might fit the data:
###Code
beta_1 = 0.10
beta_2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta_1 , beta_2)
#plot initial prediction against datapoints
plt.plot(x_data, Y_pred*15000000000000.)
plt.plot(x_data, y_data, 'ro')
###Output
_____no_output_____
###Markdown
Our task here is to find the best parameters for our model. Let's first normalize our x and y:
###Code
# Let's normalize our data
xdata = x_data / max(x_data)
ydata = y_data / max(y_data)
###Output
_____no_output_____
###Markdown
How do we find the best parameters for our fit line? We can use __curve_fit__, which uses non-linear least squares to fit our sigmoid function to the data: it finds optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, *popt) - ydata is minimized. popt holds our optimized parameters.
###Code
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
#print the final parameters
print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1]))
###Output
_____no_output_____
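###Markdown
Besides popt, curve_fit also returns pcov, the estimated covariance matrix of the fitted parameters; the square roots of its diagonal give one-standard-deviation uncertainties on $\beta_1$ and $\beta_2$. A short check (an addition to the original lab):
###Code
# One-standard-deviation uncertainties of the fitted parameters
perr = np.sqrt(np.diag(pcov))
print(" std(beta_1) = %f, std(beta_2) = %f" % (perr[0], perr[1]))
###Output
_____no_output_____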
###Markdown
Now we plot our resulting regression model.
###Code
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
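###Markdown
Note that the plot above is drawn in normalized units (both x and y were divided by their maxima), even though the axes are labeled 'Year' and 'GDP'. A minimal sketch (an addition, assuming the same max-normalization used above) that rescales the fitted curve back to the original units:
###Code
x_plot = np.linspace(1960, 2015, 55)
y_fit = sigmoid(x_plot / max(x_data), *popt)  # evaluate the fit in normalized x
plt.figure(figsize=(8,5))
plt.plot(x_data, y_data, 'ro', label='data')
plt.plot(x_plot, y_fit * max(y_data), linewidth=3.0, label='fit')  # rescale y back to dollars
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____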
###Markdown
Practice Can you calculate the accuracy of our model?
###Code
# write your code here
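# One possible solution sketch (an assumption -- the lab leaves the approach open):
# split the normalized data into train/test sets, refit, and report error metrics.
msk = np.random.rand(len(xdata)) < 0.8
train_x, test_x = xdata[msk], xdata[~msk]
train_y, test_y = ydata[msk], ydata[~msk]

# fit on the training set and predict on the test set
popt_t, pcov_t = curve_fit(sigmoid, train_x, train_y)
y_hat = sigmoid(test_x, *popt_t)

print("Mean absolute error: %.4f" % np.mean(np.absolute(y_hat - test_y)))
print("Residual sum of squares (MSE): %.4f" % np.mean((y_hat - test_y) ** 2))
from sklearn.metrics import r2_score
print("R2-score: %.4f" % r2_score(test_y, y_hat))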
###Output
_____no_output_____
updaters/elko.ipynb
###Markdown
Elko Initialization
###Code
import os
import sys
from django.utils import timezone
sys.path.append('/home/ubuntu/anodos.ru/anodos/')
os.environ['DJANGO_SETTINGS_MODULE'] = 'anodos.settings'
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
import re
import catalog.runner
from catalog.models import *
class Runner(catalog.runner.Runner):

    name = 'Elko'
    alias = 'elko'
    url = {
        'start': 'https://ecom.elko.ru/Account/Login',
        'login': 'https://ecom.elko.ru/Account/Login',
        'price': 'https://ecom.elko.ru/Catalog/PriceList?'}

    def __init__(self):

        super().__init__()

        self.stock = self.take_stock('stock', 'склад', 3, 10)
        self.transit = self.take_stock('transit', 'транзит', 10, 60)

    def run(self):

        # Open the landing page
        tree = self.load_html(self.url['start'])
        token = self.xpath_string(tree, './/input[@name="__RequestVerificationToken"]/@value')

        self.login({'__RequestVerificationToken': token,
                    'Amnesia': '',
                    'Username': self.updater.login,
                    'Password': self.updater.password,
                    'submit': 'Войти',
                    'Username2': ''})

        data = self.load(self.url['price'], result_type='content')

        return data
s = Runner()
data = s.run()
import xlrd

# Row and column indices
num = {'header': 5}

# Recognized header words (kept in Russian, since they are matched against
# the column headers of the downloaded price list)
word = {'category': 'Категория',
        'category_sub': 'Подкатегория',
        'party_article': 'Код ELKO',
        'product_vendor': 'Производитель',
        'product_article': 'Заводской код',
        'product_name': 'Название и описание продукта',
        'product_description': 'Дополнительная информация',
        'party_price': 'Цена',
        'party_quantity': 'В наличии',
        'product_warranty': 'Гарантия',
        'product_url': 'Ссылка на товар'}

book = xlrd.open_workbook(file_contents=data)
sheet = book.sheet_by_index(0)

for row_num in range(sheet.nrows):
    row = sheet.row_values(row_num)

    # Rows above the table header
    if row_num < num['header']:
        continue

    # Table header: map each recognized column title to its column index
    elif row_num == num['header']:
        for cel_num, cel in enumerate(row):
            if str(cel).strip() == word['category']:
                num['category'] = cel_num
            elif str(cel).strip() == word['category_sub']:
                num['category_sub'] = cel_num
            elif str(cel).strip() == word['party_article']:
                num['party_article'] = cel_num
            elif str(cel).strip() == word['product_vendor']:
                num['product_vendor'] = cel_num
            elif str(cel).strip() == word['product_article']:
                num['product_article'] = cel_num
            elif str(cel).strip() == word['product_name']:
                num['product_name'] = cel_num
            elif str(cel).strip() == word['product_description']:
                num['product_description'] = cel_num
            elif str(cel).strip() == word['party_price']:
                num['party_price'] = cel_num
            elif str(cel).strip() == word['party_quantity']:
                num['party_quantity'] = cel_num
            elif str(cel).strip() == word['product_warranty']:
                num['product_warranty'] = cel_num
            elif str(cel).strip() == word['product_url']:
                num['product_url'] = cel_num

        # Check that all columns were recognized
        if len(num) > len(word):
            print("Data structure unchanged.")
        else:
            raise ValueError("Data structure error: not all columns were recognized.")

    # Product row
    elif row[num['product_article']] and row[num['product_vendor']]:

        product_ = {}
        party_ = {}

        # Category
        category = "{} | {}".format(row[num['category']], row[num['category_sub']])

        # Vendor
        product_['vendor'] = s.fix_name(row[num['product_vendor']])
        product_['vendor'] = Vendor.objects.get_by_key(updater=s.updater, key=product_['vendor'])

        # Product
        product_['article'] = s.fix_article(row[num['product_article']])
        product_['name'] = s.fix_name(row[num['product_name']])
        product_['description'] = s.fix_name(row[num['product_description']])
        product_['warranty'] = s.fix_name(row[num['product_warranty']])
        product_['url'] = s.fix_name(row[num['product_url']])

        try:
            product = Product.objects.take(article=product_['article'],
                                           vendor=product_['vendor'],
                                           name=product_['name'])
            # s.products.append(product)
        except ValueError as error:
            continue

        # Party (stock/transit lot)
        party_['quantity_stock'] = s.fix_quantity(row[num['party_quantity']])
        if row[num['party_quantity']] == 'в транзите':
            party_['quantity_transit'] = None
        else:
            party_['quantity_transit'] = 0
        party_['article'] = s.fix_quantity(row[num['party_article']])
        party_['price'] = s.fix_price(row[num['party_price']])

        try:
            party = Party.objects.make(product=product,
                                       stock=s.stock,
                                       article=party_['article'],
                                       price=party_['price'],
                                       currency=s.rub,
                                       quantity=party_['quantity_stock'],
                                       time=s.start_time)
            # self.parties.append(party)
        except ValueError as error:
            pass

        try:
            party = Party.objects.make(product=product,
                                       stock=s.transit,
                                       article=party_['article'],
                                       price=party_['price'],
                                       currency=s.rub,
                                       quantity=party_['quantity_transit'],
                                       time=s.start_time)
            # self.parties.append(party)
        except ValueError as error:
            pass
###Output
_____no_output_____
example/RunModel/Python_Example/Python_Model_with_Heterogeneous_Data.ipynb
###Markdown
RunModel with Heterogeneous Input: Python model execution
The RunModel class is capable of passing input in different formats into a single computational model. This means that the samples passed into a model can be passed as:
- floating point values
- numpy arrays
- lists
- tuples
- lists of other iterables
- numpy arrays of other iterables
- or any combination of the above
In the examples below, we demonstrate the use of a Python computational model with inputs that are combinations of the above. Some notes on their use:
1. UQpy converts all sample input to a numpy array with at least two dimensions. The first dimension, i.e. len(samples), must correspond to the number of samples being passed for model execution. The second dimension, i.e. len(samples[0]), must correspond to the number of variables that each sample possesses.
2. Each individual sample, i.e. sample[j], may be composed of multiple data types -- with each variable having a different data type. For example, sample[j][k] may be a floating point value and sample[j][l] may be an array of arbitrary dimension.
3. If a specific variable has multiple dimensions, the user may specify the index to be returned in the input file. For example, the placeholder for a variable x1 corresponding to sample[j][l] that is an array of shape (1,4) can be read as <x1[0, 3]>, which will return the final (0,3) component of samples[j][l].
4. If the user does not specify the index for a multidimensional variable, then the entire multidimensional variable is flattened and written with comma delimiters.
Michael D. Shields, 29 April 2020

Python Model Summary
Examples 1-2: The provided Python models take the sum of three random variables: $s = \sum_{i=1}^3 x_i$, with $x_i \sim N(0,1)$.
Example 3: The provided Python model takes the product of a random variable and the determinant of a random matrix: $z = x \det(Y)$, with $x \sim N(0,1)$ and $Y$ a 3x3 matrix of standard normal random variables.
The Python model may be provided as either a class or a function. The examples below explore both cases.
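###Markdown
For reference, the model file python_model.py itself is not reproduced in this notebook. A conceptual sketch of what its models compute, based only on the summary above (an assumption -- RunModel's actual class/function calling conventions are omitted):
###Code
import numpy as np

def sum_of_three_rvs(x):
    # Examples 1-2: s = x1 + x2 + x3
    return float(np.sum(x))

def scaled_determinant(x, y):
    # Example 3: z = x * det(Y), with Y a 3x3 matrix
    return float(x * np.linalg.det(y))
###Output
_____no_output_____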
###Code
from UQpy.SampleMethods import MCS
from UQpy.RunModel import RunModel
from UQpy.Distributions import Normal
import matplotlib.pyplot as plt
import time
import numpy as np
###Output
_____no_output_____
###Markdown
Pick which model to run. Options:
- 'all'
- 'scalar'
- 'vector'
- 'mixed'
- 'fixed'
###Code
pick_model = 'all'
###Output
_____no_output_____
###Markdown
Example 1: Three scalar random variables In this example, we pass three scalar random variables. Note that this is different from assigning a single variable with three components, which will be handled in the following example. Here we will pass the samples both as an ndarray and as a list. Recall that UQpy converts all samples into an ndarray of at least two dimensions internally.
###Code
if pick_model == 'scalar' or pick_model == 'vector' or pick_model == 'all':

    # Call MCS to generate samples
    # THIS WILL NEED TO BE REWRITTEN WITH DISTRIBUTION AND MCS UPDATES --------------------------------------------
    # x_mcs = MCS(dist_name=['Normal','Normal','Normal'], dist_params=[[0,1],[0,1],[0,1]], nsamples=5,
    #             var_names = ['var1', 'var11', 'var111'])
    # -------------------------------------------------------------------------------------------------------------

    # Call MCS to generate samples
    d = Normal(loc=0, scale=1)
    x_mcs = MCS(dist_object=[d, d, d], nsamples=5, random_state=987979)
    names = ['var1', 'var11', 'var111']

    # UQpy returns samples as an ndarray. Convert them to a list for part 1.2
    x_mcs_list = list(x_mcs.samples)

    print("Monte Carlo samples of three random variables from a standard normal distribution.")
    print('Samples stored as an array:')
    print('Data type:', type(x_mcs.samples))
    print('Number of samples:', len(x_mcs.samples))
    print('Dimensions of samples:', np.shape(x_mcs.samples))
    print('Samples')
    print(x_mcs.samples)
    print()

    print('Samples stored as a list:')
    print('Data type:', type(x_mcs_list))
    print('Number of samples:', len(x_mcs_list))
    print('Dimensions of samples:', np.shape(x_mcs_list))
    print('Samples:')
    print(x_mcs_list)
###Output
Monte Carlo samples of three random variables from a standard normal distribution.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 3)
Samples
[[ 2.23176466 -0.64178252 -0.38551651]
[ 2.39233592 0.23474428 0.89104532]
[-0.27088124 0.6625034 -1.66249933]
[ 0.1157384 -0.09437841 1.04910279]
[ 0.03322176 -0.09323229 -0.45691713]]
Samples stored as a list:
Data type: <class 'list'>
Number of samples: 5
Dimensions of samples: (5, 3)
Samples:
[array([ 2.23176466, -0.64178252, -0.38551651]), array([2.39233592, 0.23474428, 0.89104532]), array([-0.27088124, 0.6625034 , -1.66249933]), array([ 0.1157384 , -0.09437841, 1.04910279]), array([ 0.03322176, -0.09323229, -0.45691713])]
###Markdown
1.1 Pass samples as ndarray, Python class called, serial execution This example uses the following files: - model_script = python_model.py
###Code
if pick_model == 'scalar' or pick_model == 'all':

    # Call to RunModel - Here we run the model while instantiating the RunModel object.
    t = time.time()
    m11 = RunModel(ntasks=1, model_script='python_model.py', model_object_name='SumRVs',
                   model_dir='Python_Runs', verbose=True)
    m11.run(samples=x_mcs.samples)
    t_ser_python = time.time() - t
    print("\nTime for serial execution:")
    print(t_ser_python)
    print()
    print("The values returned from the Python simulation:")
    print(m11.qoi_list)
###Output
UQpy: The following directory has been created for model evaluations:
/Users/audreyolivier/Documents/JHU_Research/UQpy/example/RunModel/Python_Example/Python_Runs_2020_06_15_05_31_042127_PM
UQpy: The model files have been copied to the following directory for evaluation:
/Users/audreyolivier/Documents/JHU_Research/UQpy/example/RunModel/Python_Example/Python_Runs_2020_06_15_05_31_042127_PM
UQpy: No samples are provided. Creating the object and building the model directory.
UQpy: All model evaluations will be executed from the following directory:
/Users/audreyolivier/Documents/JHU_Research/UQpy/example/RunModel/Python_Example/Python_Runs_2020_06_15_05_31_042127_PM
UQpy: The model class that will be run: SumRVs
UQpy: Performing serial execution of a Python model.
UQpy: Serial execution of the python model complete.
UQpy: Returning to the parent directory:
/Users/audreyolivier/Documents/JHU_Research/UQpy/example/RunModel/Python_Example
Time for serial execution:
0.009662866592407227
The values returned from the Python simulation:
[1.2044656328098802, 3.518125524609024, -1.2708771679901163, 1.0704627778055296, -0.5169276561716473]
###Markdown
1.2 Pass samples as list, Python function called, parallel execution This example uses the following files: - model_script = python_model.py
###Code
if pick_model == 'scalar' or pick_model == 'all':

    # Call to RunModel - Here we run the model while instantiating the RunModel object.
    t = time.time()
    m12 = RunModel(samples=x_mcs_list, ntasks=2, model_script='python_model.py',
                   model_object_name='sum_rvs', model_dir='Python_Runs')
    t_par_python = time.time() - t
    print("\nTime for parallel execution:")
    print(t_par_python)
    print()
    print("The values returned from the Python simulation:")
    print(m12.qoi_list)
###Output
Time for parallel execution:
0.016775846481323242
The values returned from the Python simulation:
[array([1.20446563]), array([3.51812552]), array([-1.27087717]), array([1.07046278]), array([-0.51692766])]
###Markdown
Example 2: Single tri-variate random variable In this example, we pass three random variables in as a trivariate random variable. Note that this is different from assigning three scalar random variables, which was handled in Example 1. Again, we will pass the samples both as an ndarray and as a list. Recall that UQpy converts all samples into an ndarray of at least two dimensions internally. Restructure the samples To pass the samples in as a single tri-variate variable, we need to reshape the samples from shape (5, 3) to shape (5, 1, 3).
###Code
if pick_model == 'vector' or pick_model == 'all':

    x_mcs_tri = x_mcs.samples.reshape(5, 1, 3)
    x_mcs_tri_list = list(x_mcs_tri)

    print("Monte Carlo samples of three random variables from a standard normal distribution.")
    print('Samples stored as an array:')
    print('Data type:', type(x_mcs_tri))
    print('Number of samples:', len(x_mcs_tri))
    print('Dimensions of samples:', np.shape(x_mcs_tri))
    print('Samples')
    print(x_mcs_tri)
    print()

    print('Samples stored as a list:')
    print('Data type:', type(x_mcs_tri_list))
    print('Number of samples:', len(x_mcs_tri_list))
    print('Dimensions of samples:', np.shape(x_mcs_tri_list))
    print('Samples:')
    print(x_mcs_tri_list)
###Output
Monte Carlo samples of three random variables from a standard normal distribution.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 1, 3)
Samples
[[[ 2.23176466 -0.64178252 -0.38551651]]
[[ 2.39233592 0.23474428 0.89104532]]
[[-0.27088124 0.6625034 -1.66249933]]
[[ 0.1157384 -0.09437841 1.04910279]]
[[ 0.03322176 -0.09323229 -0.45691713]]]
Samples stored as a list:
Data type: <class 'list'>
Number of samples: 5
Dimensions of samples: (5, 1, 3)
Samples:
[array([[ 2.23176466, -0.64178252, -0.38551651]]), array([[2.39233592, 0.23474428, 0.89104532]]), array([[-0.27088124, 0.6625034 , -1.66249933]]), array([[ 0.1157384 , -0.09437841, 1.04910279]]), array([[ 0.03322176, -0.09323229, -0.45691713]])]
###Markdown
2.1 Pass samples as ndarray, Python function called, serial execution This example uses the following files: - model_script = python_model.py
###Code
if pick_model == 'vector' or pick_model == 'all':

    # Call to RunModel - Here we run the model while instantiating the RunModel object.
    t = time.time()
    m21 = RunModel(samples=x_mcs_tri, ntasks=1, model_script='python_model.py',
                   model_object_name='sum_rvs_vec', model_dir='Python_Runs')
    t_ser_python = time.time() - t
    print("\nTime for serial execution:")
    print(t_ser_python)
    print()
    print("The values returned from the Python simulation:")
    print(m21.qoi_list)
###Output
Time for serial execution:
0.00709080696105957
The values returned from the Python simulation:
[array([1.20446563]), array([3.51812552]), array([-1.27087717]), array([1.07046278]), array([-0.51692766])]
###Markdown
2.2 Pass samples as list, Python class called, parallel execution This example uses the following files: - model_script = python_model.py
###Code
if pick_model == 'vector' or pick_model == 'all':

    # Call to RunModel - Here we run the model while instantiating the RunModel object.
    t = time.time()
    m22 = RunModel(samples=x_mcs_tri_list, ntasks=2, model_script='python_model.py',
                   model_object_name='SumRVs', model_dir='Python_Runs')
    t_par_python = time.time() - t
    print("\nTime for parallel execution:")
    print(t_par_python)
    print()
    print("The values returned from the Python simulation:")
    print(m22.qoi_list)
###Output
Time for parallel execution:
0.01832294464111328
The values returned from the Python simulation:
[array([1.20446563]), array([3.51812552]), array([-1.27087717]), array([1.07046278]), array([-0.51692766])]
###Markdown
Example 3: Passing a scalar and an array to RunModel In this example, we pass a single scalar random variable as well as an array into a Python model. Again, we will pass the samples both as an ndarray and as a list. Recall that UQpy converts all samples into an ndarray of at least two dimensions internally.
###Code
if pick_model == 'mixed' or pick_model == 'vector' or pick_model == 'all':

    # Call MCS to generate samples
    # THIS WILL NEED TO BE REWRITTEN WITH DISTRIBUTION AND MCS UPDATES --------------------------------------------
    # First generate the scalar random variable
    # x_mcs1 = MCS(dist_name=['Normal'], dist_params=[[0,1]], nsamples=5, var_names = ['var1'])
    # Next generate a 3x3 random matrix
    # x_mcs2 = MCS(dist_name=['Normal','Normal','Normal'], dist_params=[[0,1],[0,1],[0,1]], nsamples=15)
    # x_mcs_array = x_mcs2.samples.reshape((5,3,3))
    # -------------------------------------------------------------------------------------------------------------

    # Call MCS to generate samples
    d = Normal(loc=0, scale=1)
    x_mcs1 = MCS(dist_object=d, nsamples=5, random_state=987979)
    x_mcs2 = MCS(dist_object=[d, d, d], nsamples=15, random_state=34876)
    x_mcs_array = x_mcs2.samples.reshape((5, 3, 3))

    print("Monte Carlo samples of a single random variable from a standard normal distribution.")
    print('Samples stored as an array:')
    print('Data type:', type(x_mcs1.samples))
    print('Number of samples:', len(x_mcs1.samples))
    print('Dimensions of samples:', np.shape(x_mcs1.samples))
    print('Samples')
    print(x_mcs1.samples)
    print()

    print("Monte Carlo samples of a 3x3 matrix of standard normal random variables.")
    print('Samples stored as an array:')
    print('Data type:', type(x_mcs_array))
    print('Number of samples:', len(x_mcs_array))
    print('Dimensions of samples:', np.shape(x_mcs_array))
    print('Samples')
    print(x_mcs_array)
    print()

    # Create a set of samples to be passed into RunModel
    # Here we need to create the mixed samples such that each sample has a single scalar and a single 3x3 matrix.
    # This data structure is essential to passing the input to UQpy correctly.
    x_mixed = []
    for i in range(5):
        x_mixed.append([x_mcs1.samples[i], x_mcs_array[i]])

    print("Combined samples with a scalar and a 3x3 matrix of standard normal random variables.")
    print('Samples stored as a list:')
    print('Data type:', type(x_mixed))
    print('Number of samples:', len(x_mixed))
    print('Dimensions of samples:', np.shape(x_mixed))
    print('Samples')
    print(x_mixed)
    print()

    x_mixed_array = np.atleast_2d(np.asarray(x_mixed))
    print("Combined samples with a scalar and a 3x3 matrix of standard normal random variables.")
    print('Samples stored as ndarray:')
    print('Data type:', type(x_mixed_array))
    print('Number of samples:', len(x_mixed_array))
    print('Dimensions of samples:', np.shape(x_mixed_array))
    print('Samples')
    print(x_mixed_array)
    print()

    # Notice that, in both the ndarray case and the list case, the samples have dimension (5,2). That is, there
    # are five samples of two variables. The first variable is a scalar. The second variable is a 3x3 matrix.
###Output
Monte Carlo samples of a single random variable from a standard normal distribution.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 1)
Samples
[[ 2.23176466]
[ 2.39233592]
[-0.27088124]
[ 0.1157384 ]
[ 0.03322176]]
Monte Carlo samples of a 3x3 matrix of standard normal random variables.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 3, 3)
Samples
[[[ 1.41844416e-01 3.02388356e-01 -1.23439338e+00]
[ 7.20992367e-01 1.52119064e+00 3.89663401e-01]
[-8.41732889e-01 6.22714399e-01 4.81666741e-01]]
[[-6.44218784e-01 -1.15450313e+00 -1.13456854e+00]
[ 1.19971101e+00 3.80664178e-02 -8.49111812e-02]
[-5.62372993e-01 1.02606814e+00 4.24392486e-01]]
[[-1.47495390e+00 -1.83639697e-05 1.03667812e-01]
[ 3.67230456e-01 4.43218583e-01 2.19753402e-01]
[ 1.68307016e+00 6.50639674e-02 1.73664803e+00]]
[[ 8.12629703e-01 -7.34797593e-01 -4.92923798e-01]
[-6.03117372e-01 1.84220244e+00 6.86679520e-02]
[-6.17815065e-01 1.38407753e-01 -1.02776267e+00]]
[[-7.73687187e-01 -1.14836191e+00 -6.79684240e-01]
[-9.67105857e-01 -1.31879204e+00 4.76126420e-01]
[-2.94287448e-01 -1.70578952e-01 1.24595246e-01]]]
Combined samples with a scalar and a 3x3 matrix of standard normal random variables.
Samples stored as a list:
Data type: <class 'list'>
Number of samples: 5
Dimensions of samples: (5, 2)
Samples
[[array([2.23176466]), array([[ 0.14184442, 0.30238836, -1.23439338],
[ 0.72099237, 1.52119064, 0.3896634 ],
[-0.84173289, 0.6227144 , 0.48166674]])], [array([2.39233592]), array([[-0.64421878, -1.15450313, -1.13456854],
[ 1.19971101, 0.03806642, -0.08491118],
[-0.56237299, 1.02606814, 0.42439249]])], [array([-0.27088124]), array([[-1.47495390e+00, -1.83639697e-05, 1.03667812e-01],
[ 3.67230456e-01, 4.43218583e-01, 2.19753402e-01],
[ 1.68307016e+00, 6.50639674e-02, 1.73664803e+00]])], [array([0.1157384]), array([[ 0.8126297 , -0.73479759, -0.4929238 ],
[-0.60311737, 1.84220244, 0.06866795],
[-0.61781506, 0.13840775, -1.02776267]])], [array([0.03322176]), array([[-0.77368719, -1.14836191, -0.67968424],
[-0.96710586, -1.31879204, 0.47612642],
[-0.29428745, -0.17057895, 0.12459525]])]]
Combined samples with a scalar and a 3x3 matrix of standard normal random variables.
Samples stored as ndarray:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 2)
Samples
[[array([2.23176466])
array([[ 0.14184442, 0.30238836, -1.23439338],
[ 0.72099237, 1.52119064, 0.3896634 ],
[-0.84173289, 0.6227144 , 0.48166674]])]
[array([2.39233592])
array([[-0.64421878, -1.15450313, -1.13456854],
[ 1.19971101, 0.03806642, -0.08491118],
[-0.56237299, 1.02606814, 0.42439249]])]
[array([-0.27088124])
array([[-1.47495390e+00, -1.83639697e-05, 1.03667812e-01],
[ 3.67230456e-01, 4.43218583e-01, 2.19753402e-01],
[ 1.68307016e+00, 6.50639674e-02, 1.73664803e+00]])]
[array([0.1157384])
array([[ 0.8126297 , -0.73479759, -0.4929238 ],
[-0.60311737, 1.84220244, 0.06866795],
[-0.61781506, 0.13840775, -1.02776267]])]
[array([0.03322176])
array([[-0.77368719, -1.14836191, -0.67968424],
[-0.96710586, -1.31879204, 0.47612642],
[-0.29428745, -0.17057895, 0.12459525]])]]
###Markdown
3.1 Pass samples as ndarray, Python class called, serial execution This example uses the following files: - model_script = python_model.py
###Code
if pick_model == 'mixed' or pick_model == 'all':

    # Call to RunModel - Here we run the model while instantiating the RunModel object.
    t = time.time()
    m31 = RunModel(samples=x_mixed_array, ntasks=1, model_script='python_model.py',
                   model_object_name='DetRVs', model_dir='Python_Runs', vec=False)
    t_ser_python = time.time() - t
    print("\nTime for serial execution:")
    print(t_ser_python)
    print()
    print("The values returned from the Python simulation:")
    print(m31.qoi_list)
###Output
Time for serial execution:
0.011285781860351562
The values returned from the Python simulation:
[array([-5.06488369]), array([-2.28414587]), array([0.32209288]), array([-0.18281303]), array([0.00792293])]
###Markdown
3.2 Pass samples as list, Python function called, parallel execution This example uses the following files: - model_script = python_model.py
###Code
if pick_model == 'mixed' or pick_model == 'all':

    # Call to RunModel - Here we run the model while instantiating the RunModel object.
    # Note that the parallel model_object handles only one sample at a time.
    t = time.time()
    m32 = RunModel(samples=x_mixed, ntasks=1, model_script='python_model.py',
                   model_object_name='det_rvs_par', model_dir='Python_Runs', vec=False)
    t_par_python = time.time() - t
    print("\nTime for parallel execution:")
    print(t_par_python)
    print()
    print("The values returned from the Python simulation:")
    print(m32.qoi_list)
###Output
Time for parallel execution:
0.007647037506103516
The values returned from the Python simulation:
[array([-5.06488369]), array([-2.28414587]), array([0.32209288]), array([-0.18281303]), array([0.00792293])]
###Markdown
Example 4: Passing a fixed variable and an array of random variables to RunModel In this example, we pass a fixed-value coefficient as well as an array of random variables into a Python model. Again, we will pass the samples both as an ndarray and as a list. Recall that UQpy converts all samples into an ndarray of at least two dimensions internally.
###Code
if pick_model == 'mixed' or pick_model == 'all':

    x = 2.5
    print('Constant Coefficient:')
    print(x)
    print()

    print("Monte Carlo samples of a 3x3 matrix of standard normal random variables.")
    print('Samples stored as an array:')
    print('Data type:', type(x_mcs_array))
    print('Number of samples:', len(x_mcs_array))
    print('Dimensions of samples:', np.shape(x_mcs_array))
    print('Samples')
    print(x_mcs_array)
    print()

    x_mcs_list = list(x_mcs_array)
    print("3x3 matrix of standard normal random variables.")
    print('Samples stored as a list:')
    print('Data type:', type(x_mcs_list))
    print('Number of samples:', len(x_mcs_list))
    print('Dimensions of samples:', np.shape(x_mcs_list))
    print('Samples')
    print(x_mcs_list)
    print()

    # Notice that, in both the ndarray case and the list case, the samples have dimension (5,3,3). That is,
    # there are five samples, each a 3x3 matrix. The scalar coefficient is passed separately as a fixed
    # keyword argument (coeff) rather than as part of the samples.
###Output
Constant Coefficient:
2.5
Monte Carlo samples of a 3x3 matrix of standard normal random variables.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 3, 3)
Samples
[[[ 1.41844416e-01 3.02388356e-01 -1.23439338e+00]
[ 7.20992367e-01 1.52119064e+00 3.89663401e-01]
[-8.41732889e-01 6.22714399e-01 4.81666741e-01]]
[[-6.44218784e-01 -1.15450313e+00 -1.13456854e+00]
[ 1.19971101e+00 3.80664178e-02 -8.49111812e-02]
[-5.62372993e-01 1.02606814e+00 4.24392486e-01]]
[[-1.47495390e+00 -1.83639697e-05 1.03667812e-01]
[ 3.67230456e-01 4.43218583e-01 2.19753402e-01]
[ 1.68307016e+00 6.50639674e-02 1.73664803e+00]]
[[ 8.12629703e-01 -7.34797593e-01 -4.92923798e-01]
[-6.03117372e-01 1.84220244e+00 6.86679520e-02]
[-6.17815065e-01 1.38407753e-01 -1.02776267e+00]]
[[-7.73687187e-01 -1.14836191e+00 -6.79684240e-01]
[-9.67105857e-01 -1.31879204e+00 4.76126420e-01]
[-2.94287448e-01 -1.70578952e-01 1.24595246e-01]]]
3x3 matrix of standard normal random variables.
Samples stored as a list:
Data type: <class 'list'>
Number of samples: 5
Dimensions of samples: (5, 3, 3)
Samples
[array([[ 0.14184442, 0.30238836, -1.23439338],
[ 0.72099237, 1.52119064, 0.3896634 ],
[-0.84173289, 0.6227144 , 0.48166674]]), array([[-0.64421878, -1.15450313, -1.13456854],
[ 1.19971101, 0.03806642, -0.08491118],
[-0.56237299, 1.02606814, 0.42439249]]), array([[-1.47495390e+00, -1.83639697e-05, 1.03667812e-01],
[ 3.67230456e-01, 4.43218583e-01, 2.19753402e-01],
[ 1.68307016e+00, 6.50639674e-02, 1.73664803e+00]]), array([[ 0.8126297 , -0.73479759, -0.4929238 ],
[-0.60311737, 1.84220244, 0.06866795],
[-0.61781506, 0.13840775, -1.02776267]]), array([[-0.77368719, -1.14836191, -0.67968424],
[-0.96710586, -1.31879204, 0.47612642],
[-0.29428745, -0.17057895, 0.12459525]])]
###Markdown
4.1 Pass samples as ndarray, Python class called, serial execution This example uses the following files: - model_script = python_model.py
###Code
if pick_model == 'mixed' or pick_model == 'all':

    # Call to RunModel - Here we run the model while instantiating the RunModel object.
    t = time.time()
    m41 = RunModel(samples=x_mcs_array, ntasks=1, model_script='python_model.py',
                   model_object_name='det_rvs_fixed', model_dir='Python_Runs', vec=False, coeff=x)
    t_ser_python = time.time() - t
    print("\nTime for serial execution:")
    print(t_ser_python)
    print()
    print("The values returned from the Python simulation:")
    print(m41.qoi_list)
###Output
Time for serial execution:
0.00769805908203125
The values returned from the Python simulation:
[-5.673631015232223, -2.3869409943327016, -2.9726392645350614, -3.948841380481718, 0.5962156121663712]
###Markdown
4.2 Pass samples as list, Python class called, serial execution This example uses the following files: - model_script = python_model.py
###Code
if pick_model == 'mixed' or pick_model == 'all':

    # Call to RunModel - Here we run the model while instantiating the RunModel object.
    t = time.time()
    m42 = RunModel(samples=x_mcs_list, ntasks=1, model_script='python_model.py',
                   model_object_name='det_rvs_fixed', model_dir='Python_Runs', vec=False, coeff=x)
    t_ser_python = time.time() - t
    print("\nTime for serial execution:")
    print(t_ser_python)
    print()
    print("The values returned from the Python simulation:")
    print(m42.qoi_list)
###Output
Time for serial execution:
0.00405573844909668
The values returned from the Python simulation:
[-5.673631015232223, -2.3869409943327016, -2.9726392645350614, -3.948841380481718, 0.5962156121663712]
###Markdown
RunModel with Heterogeneous Input: Python model executionThe RunModel class is capable of passing input in different formats into a single computational model. This means that the samples passed into a model can be passed as:- floating point values- numpy arrays- lists - tuples- lists of other iterables- numpy arrays of other iterables- or any combination of the aboveIn the examples below, we demonstrate the use of a Python computational model with inputs that are combinations of the above.Some notes on their use:1. UQpy converts all sample input to a numpy array with at least two dimensions. The first dimension, i.e. len(samples) must correspond to the number of samples being passed for model execution. The second dimension, i.e. len(samples[0]) must correspond to the number of variables that each sample possesses.2. Each individual sample, i.e. sample[j], may be composed of multiple data types -- with each variable having a different data type. For example, sample[j][k] may be a floating point value and sample[j][l] may be an array of arbitrary dimension.3. If a specific variable has multiple dimensions, the user may specify the index to be return in the input file. For example, the place holder for a variable x1 corresponding to sample[j][l] that is an array of shape (1,4) can be read as , which will return the final (0,3) component of samples[j][l].4. If the user does not specify the index for a multidimensional variable, then the entire multidimensional variable is flattened and written with comma delimiters.Michael D. Shields 29 April 2020 Python Model SummaryExamples 1-2:The provided Matlab models take the sum of three random variables: $s = \sum_{i=1}^3 x_i$ $x_i \sim N(0,1)$Example 3:The provided Matlab model takes the product of a random variable and the determinant of a random matrix: $z = x \det(Y)$ $x \sim N(0,1)$ $y$ is a 3x3 matrix of standard normal random variables.The Python model may be provided as either a class or a function. The examples below explore both cases.
###Code
from UQpy.SampleMethods import MCS
from UQpy.RunModel import RunModel
from UQpy.Distributions import Normal
import matplotlib.pyplot as plt
import time
import numpy as np
###Output
_____no_output_____
###Markdown
Pick which model to runOptions:- 'all'- 'scalar'- 'vector'- 'mixed'- 'fixed'
###Code
pick_model = 'all'
###Output
_____no_output_____
###Markdown
Example 1: Three scalar random variablesIn this example, we pass three scalar random variables. Note that this is different from assigning a single variable with three components, which will be handled in the following example. Here we will pass the samples both as an ndarray and as a list. Recall that UQpy converts all samples into an ndarray of at least two dimensions internally.
###Code
if pick_model == 'scalar' or pick_model =='vector' or pick_model == 'all':
# Call MCS to generate samples
# THIS WILL NEED TO BE REWRITTEN WITH DISTRIBUTION AND MCS UPDATES --------------------------------------------
# x_mcs = MCS(dist_name=['Normal','Normal','Normal'], dist_params=[[0,1],[0,1],[0,1]], nsamples=5,
# var_names = ['var1', 'var11', 'var111'])
# -------------------------------------------------------------------------------------------------------------
# Call MCS to generate samples
d = Normal(loc=0, scale=1)
x_mcs = MCS(dist_object=[d, d, d], nsamples=5, random_state=987979)
names = ['var1', 'var11', 'var111']
# UQpy returns samples as an ndarray. Convert them to a list for part 1.2
x_mcs_list = list(x_mcs.samples)
print("Monte Carlo samples of three random variables from a standard normal distribution.")
print('Samples stored as an array:')
print('Data type:', type(x_mcs.samples))
print('Number of samples:', len(x_mcs.samples))
print('Dimensions of samples:', np.shape(x_mcs.samples))
print('Samples')
print(x_mcs.samples)
print()
print('Samples stored as a list:')
print('Data type:', type(x_mcs_list))
print('Number of samples:', len(x_mcs_list))
print('Dimensions of samples:', np.shape(x_mcs_list))
print('Samples:')
print(x_mcs_list)
###Output
Monte Carlo samples of three random variables from a standard normal distribution.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 3)
Samples
[[ 2.23176466 -0.64178252 -0.38551651]
[ 2.39233592 0.23474428 0.89104532]
[-0.27088124 0.6625034 -1.66249933]
[ 0.1157384 -0.09437841 1.04910279]
[ 0.03322176 -0.09323229 -0.45691713]]
Samples stored as a list:
Data type: <class 'list'>
Number of samples: 5
Dimensions of samples: (5, 3)
Samples:
[array([ 2.23176466, -0.64178252, -0.38551651]), array([2.39233592, 0.23474428, 0.89104532]), array([-0.27088124, 0.6625034 , -1.66249933]), array([ 0.1157384 , -0.09437841, 1.04910279]), array([ 0.03322176, -0.09323229, -0.45691713])]
###Markdown
1.1 Pass samples as ndarray, Python class called, serial execution This examples uses the following files:- model_script = python_model.py
###Code
if pick_model == 'scalar' or pick_model == 'all':
# Call to RunModel - Here we run the model while instantiating the RunModel object.
t = time.time()
m11 = RunModel(ntasks=1, model_script='python_model.py', model_object_name='SumRVs', model_dir='Python_Runs', verbose=True)
m11.run(samples=x_mcs.samples,)
t_ser_python = time.time() - t
print("\nTime for serial execution:")
print(t_ser_python)
print()
print("The values returned from the Matlab simulation:")
print(m11.qoi_list)
###Output
UQpy: The following directory has been created for model evaluations:
/Users/audreyolivier/Documents/JHU_Research/UQpy/example/RunModel/Python_Example/Python_Runs_2020_06_15_05_31_042127_PM
UQpy: The model files have been copied to the following directory for evaluation:
/Users/audreyolivier/Documents/JHU_Research/UQpy/example/RunModel/Python_Example/Python_Runs_2020_06_15_05_31_042127_PM
UQpy: No samples are provided. Creating the object and building the model directory.
UQpy: All model evaluations will be executed from the following directory:
/Users/audreyolivier/Documents/JHU_Research/UQpy/example/RunModel/Python_Example/Python_Runs_2020_06_15_05_31_042127_PM
UQpy: The model class that will be run: SumRVs
UQpy: Performing serial execution of a Python model.
UQpy: Serial execution of the python model complete.
UQpy: Returning to the parent directory:
/Users/audreyolivier/Documents/JHU_Research/UQpy/example/RunModel/Python_Example
Time for serial execution:
0.009662866592407227
The values returned from the Matlab simulation:
[1.2044656328098802, 3.518125524609024, -1.2708771679901163, 1.0704627778055296, -0.5169276561716473]
###Markdown
1.2 Pass samples as list, Python function called, parallel execution This examples uses the following files:- model_script = python_model.py
###Code
if pick_model == 'scalar' or pick_model == 'all':
# Call to RunModel - Here we run the model while instantiating the RunModel object.
t = time.time()
m12 = RunModel(samples=x_mcs_list, ntasks=2, model_script='python_model.py',
model_object_name='sum_rvs', model_dir='Python_Runs')
t_par_python = time.time() - t
print("\nTime for parallel execution:")
print(t_par_python)
print()
print("The values returned from the Matlab simulation:")
print(m12.qoi_list)
###Output
Time for parallel execution:
0.016775846481323242
The values returned from the Matlab simulation:
[array([1.20446563]), array([3.51812552]), array([-1.27087717]), array([1.07046278]), array([-0.51692766])]
###Markdown
Example 2: Single tri-variate random variableIn this example, we pass three random variables in as a trivariate random variable. Note that this is different from assigning three scalar random variables, which was be handled in Example 1.Again, we will pass the samples both as an ndarray and as a list. Recall that UQpy converts all samples into an ndarray of at least two dimensions internally. Restructure the samplesTo pass the samples in as a single tri-variate variable, we need reshape the samples from shape (5, 3) to shape (5, 1, 3)
###Code
if pick_model == 'vector' or pick_model == 'all':
x_mcs_tri = x_mcs.samples.reshape(5, 1, 3)
x_mcs_tri_list = list(x_mcs_tri)
print("Monte Carlo samples of three random variables from a standard normal distribution.")
print('Samples stored as an array:')
print('Data type:', type(x_mcs_tri))
print('Number of samples:', len(x_mcs_tri))
print('Dimensions of samples:', np.shape(x_mcs_tri))
print('Samples')
print(x_mcs_tri)
print()
print('Samples stored as a list:')
print('Data type:', type(x_mcs_tri_list))
print('Number of samples:', len(x_mcs_tri_list))
print('Dimensions of samples:', np.shape(x_mcs_tri_list))
print('Samples:')
print(x_mcs_tri_list)
###Output
Monte Carlo samples of three random variables from a standard normal distribution.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 1, 3)
Samples
[[[ 2.23176466 -0.64178252 -0.38551651]]
[[ 2.39233592 0.23474428 0.89104532]]
[[-0.27088124 0.6625034 -1.66249933]]
[[ 0.1157384 -0.09437841 1.04910279]]
[[ 0.03322176 -0.09323229 -0.45691713]]]
Samples stored as a list:
Data type: <class 'list'>
Number of samples: 5
Dimensions of samples: (5, 1, 3)
Samples:
[array([[ 2.23176466, -0.64178252, -0.38551651]]), array([[2.39233592, 0.23474428, 0.89104532]]), array([[-0.27088124, 0.6625034 , -1.66249933]]), array([[ 0.1157384 , -0.09437841, 1.04910279]]), array([[ 0.03322176, -0.09323229, -0.45691713]])]
###Markdown
2.1 Pass samples as ndarray, Python function called, serial executionThis example uses the following files:- model_script = python_model.py
###Code
if pick_model == 'vector' or pick_model == 'all':
# Call to RunModel - Here we run the model while instantiating the RunModel object.
t = time.time()
m21 = RunModel(samples=x_mcs_tri, ntasks=1, model_script='python_model.py',
model_object_name='sum_rvs_vec', model_dir='Python_Runs')
t_ser_python = time.time() - t
print("\nTime for serial execution:")
print(t_ser_python)
print()
print("The values returned from the Matlab simulation:")
print(m21.qoi_list)
###Output
Time for serial execution:
0.00709080696105957
The values returned from the Matlab simulation:
[array([1.20446563]), array([3.51812552]), array([-1.27087717]), array([1.07046278]), array([-0.51692766])]
###Markdown
2.2 Pass samples as list, Python class called, parallel executionThis example uses the following files:- model_script = python_model.py
###Code
if pick_model == 'vector' or pick_model == 'all':
# Call to RunModel - Here we run the model while instantiating the RunModel object.
t = time.time()
m22 = RunModel(samples=x_mcs_tri_list, ntasks=2, model_script='python_model.py',
model_object_name='SumRVs', model_dir='Python_Runs')
t_par_python = time.time() - t
print("\nTime for parallel execution:")
print(t_par_python)
print()
print("The values returned from the Matlab simulation:")
print(m22.qoi_list)
###Output
Time for parallel execution:
0.01832294464111328
The values returned from the Matlab simulation:
[array([1.20446563]), array([3.51812552]), array([-1.27087717]), array([1.07046278]), array([-0.51692766])]
###Markdown
Example 3: Passing a scalar and an array to RunModelIn this example, we pass a single scalar random variable as well as an array into a Matlab model.Again, we will pass the samples both as an ndarray and as a list. Recall that UQpy converts all samples into an ndarray of at least two dimensions internally.
###Code
if pick_model == 'mixed' or pick_model =='vector' or pick_model == 'all':
# Call MCS to generate samples
# THIS WILL NEED TO BE REWRITTEN WITH DISTRIBUTION AND MCS UPDATES --------------------------------------------
# First generate the scalar random variable
# x_mcs1 = MCS(dist_name=['Normal'], dist_params=[[0,1]], nsamples=5, var_names = ['var1'])
# Next generate a 3x3 random matrix
# x_mcs2 = MCS(dist_name=['Normal','Normal','Normal'], dist_params=[[0,1],[0,1],[0,1]], nsamples=15)
# x_mcs_array = x_mcs2.samples.reshape((5,3,3))
# -------------------------------------------------------------------------------------------------------------
# Call MCS to generate samples
d = Normal(loc=0, scale=1)
x_mcs1 = MCS(dist_object=d, nsamples=5, random_state=987979)
x_mcs2 = MCS(dist_object=[d, d, d], nsamples=15, random_state=34876)
x_mcs_array = x_mcs2.samples.reshape((5,3,3))
print("Monte Carlo samples of a single random variable from a standard normal distribution.")
print('Samples stored as an array:')
print('Data type:', type(x_mcs1.samples))
print('Number of samples:', len(x_mcs1.samples))
print('Dimensions of samples:', np.shape(x_mcs1.samples))
print('Samples')
print(x_mcs1.samples)
print()
print("Monte Carlo samples of a 3x3 matrix of standard normal random variables.")
print('Samples stored as an array:')
print('Data type:', type(x_mcs_array))
print('Number of samples:', len(x_mcs_array))
print('Dimensions of samples:', np.shape(x_mcs_array))
print('Samples')
print(x_mcs_array)
print()
# Create a set of samples to be passed into RunModel
# Here we need to create the mixed samples such that each sample has a single scalar and a single 3x3 matrix.
# This data structure is essential to passing the input to UQpy correctly.
x_mixed = []
for i in range(5):
x_mixed.append([x_mcs1.samples[i], x_mcs_array[i]])
print("Combined samples with a scalar and a 3x3 matrix of standard normal random variables.")
print('Samples stored as a list:')
print('Data type:', type(x_mixed))
print('Number of samples:', len(x_mixed))
print('Dimensions of samples:', np.shape(x_mixed))
print('Samples')
print(x_mixed)
print()
x_mixed_array = np.atleast_2d(np.asarray(x_mixed))
print("Combined samples with a scalar and a 3x3 matrix of standard normal random variables.")
print('Samples stored as ndarray:')
print('Data type:', type(x_mixed_array))
print('Number of samples:', len(x_mixed_array))
print('Dimensions of samples:', np.shape(x_mixed_array))
print('Samples')
print(x_mixed_array)
print()
# Notice that, in both the ndarray case and the list case, the samples have dimension (5,2). That is, there
# are five samples of two variables. The first variable is a scalar. The second variable is a 3x3 matrix.
###Output
Monte Carlo samples of a single random variable from a standard normal distribution.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 1)
Samples
[[ 2.23176466]
[ 2.39233592]
[-0.27088124]
[ 0.1157384 ]
[ 0.03322176]]
Monte Carlo samples of a 3x3 matrix of standard normal random variables.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 3, 3)
Samples
[[[ 1.41844416e-01 3.02388356e-01 -1.23439338e+00]
[ 7.20992367e-01 1.52119064e+00 3.89663401e-01]
[-8.41732889e-01 6.22714399e-01 4.81666741e-01]]
[[-6.44218784e-01 -1.15450313e+00 -1.13456854e+00]
[ 1.19971101e+00 3.80664178e-02 -8.49111812e-02]
[-5.62372993e-01 1.02606814e+00 4.24392486e-01]]
[[-1.47495390e+00 -1.83639697e-05 1.03667812e-01]
[ 3.67230456e-01 4.43218583e-01 2.19753402e-01]
[ 1.68307016e+00 6.50639674e-02 1.73664803e+00]]
[[ 8.12629703e-01 -7.34797593e-01 -4.92923798e-01]
[-6.03117372e-01 1.84220244e+00 6.86679520e-02]
[-6.17815065e-01 1.38407753e-01 -1.02776267e+00]]
[[-7.73687187e-01 -1.14836191e+00 -6.79684240e-01]
[-9.67105857e-01 -1.31879204e+00 4.76126420e-01]
[-2.94287448e-01 -1.70578952e-01 1.24595246e-01]]]
Combined samples with a scalar and a 3x3 matrix of standard normal random variables.
Samples stored as a list:
Data type: <class 'list'>
Number of samples: 5
Dimensions of samples: (5, 2)
Samples
[[array([2.23176466]), array([[ 0.14184442, 0.30238836, -1.23439338],
[ 0.72099237, 1.52119064, 0.3896634 ],
[-0.84173289, 0.6227144 , 0.48166674]])], [array([2.39233592]), array([[-0.64421878, -1.15450313, -1.13456854],
[ 1.19971101, 0.03806642, -0.08491118],
[-0.56237299, 1.02606814, 0.42439249]])], [array([-0.27088124]), array([[-1.47495390e+00, -1.83639697e-05, 1.03667812e-01],
[ 3.67230456e-01, 4.43218583e-01, 2.19753402e-01],
[ 1.68307016e+00, 6.50639674e-02, 1.73664803e+00]])], [array([0.1157384]), array([[ 0.8126297 , -0.73479759, -0.4929238 ],
[-0.60311737, 1.84220244, 0.06866795],
[-0.61781506, 0.13840775, -1.02776267]])], [array([0.03322176]), array([[-0.77368719, -1.14836191, -0.67968424],
[-0.96710586, -1.31879204, 0.47612642],
[-0.29428745, -0.17057895, 0.12459525]])]]
Combined samples with a scalar and a 3x3 matrix of standard normal random variables.
Samples stored as ndarray:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 2)
Samples
[[array([2.23176466])
array([[ 0.14184442, 0.30238836, -1.23439338],
[ 0.72099237, 1.52119064, 0.3896634 ],
[-0.84173289, 0.6227144 , 0.48166674]])]
[array([2.39233592])
array([[-0.64421878, -1.15450313, -1.13456854],
[ 1.19971101, 0.03806642, -0.08491118],
[-0.56237299, 1.02606814, 0.42439249]])]
[array([-0.27088124])
array([[-1.47495390e+00, -1.83639697e-05, 1.03667812e-01],
[ 3.67230456e-01, 4.43218583e-01, 2.19753402e-01],
[ 1.68307016e+00, 6.50639674e-02, 1.73664803e+00]])]
[array([0.1157384])
array([[ 0.8126297 , -0.73479759, -0.4929238 ],
[-0.60311737, 1.84220244, 0.06866795],
[-0.61781506, 0.13840775, -1.02776267]])]
[array([0.03322176])
array([[-0.77368719, -1.14836191, -0.67968424],
[-0.96710586, -1.31879204, 0.47612642],
[-0.29428745, -0.17057895, 0.12459525]])]]
###Markdown
3.1 Pass samples as ndarray, Python class called, serial execution This examples uses the following files:- model_script = python_model.py
###Code
if pick_model == 'mixed' or pick_model == 'all':
# Call to RunModel - Here we run the model while instantiating the RunModel object.
t = time.time()
m31 = RunModel(samples=x_mixed_array, ntasks=1, model_script='python_model.py',
model_object_name='DetRVs', model_dir='Python_Runs', vec=False)
t_ser_python = time.time() - t
print("\nTime for serial execution:")
print(t_ser_python)
print()
print("The values returned from the Matlab simulation:")
print(m31.qoi_list)
###Output
Time for serial execution:
0.011285781860351562
The values returned from the Matlab simulation:
[array([-5.06488369]), array([-2.28414587]), array([0.32209288]), array([-0.18281303]), array([0.00792293])]
###Markdown
3.2 Pass samples as list, Python function called, parallel execution This examples uses the following files:- model_script = python_model.py
###Code
if pick_model == 'mixed' or pick_model == 'all':
# Call to RunModel - Here we run the model while instantiating the RunModel object.
# Note that the parallel model_object handles only one sample at a time.
t = time.time()
m32 = RunModel(samples=x_mixed, ntasks=1, model_script='python_model.py',
model_object_name='det_rvs_par', model_dir='Python_Runs', vec=False)
t_par_python = time.time() - t
print("\nTime for parallel execution:")
print(t_par_python)
print()
print("The values returned from the Matlab simulation:")
print(m32.qoi_list)
###Output
Time for parallel execution:
0.007647037506103516
The values returned from the Matlab simulation:
[array([-5.06488369]), array([-2.28414587]), array([0.32209288]), array([-0.18281303]), array([0.00792293])]
###Markdown
Example 4: Passing a fixed variable and an array of Random Variables to RunModelIn this example, we pass a fixed value coefficient as well as an array of random variables into a Python model.Again, we will pass the samples both as an ndarray and as a list. Recall that UQpy converts all samples into an ndarray of at least two dimensions internally.
###Code
if pick_model == 'mixed' or pick_model == 'all':
x = 2.5
print('Constant Coefficient:')
print(x)
print()
print("Monte Carlo samples of a 3x3 matrix of standard normal random variables.")
print('Samples stored as an array:')
print('Data type:', type(x_mcs_array))
print('Number of samples:', len(x_mcs_array))
print('Dimensions of samples:', np.shape(x_mcs_array))
print('Samples')
print(x_mcs_array)
print()
x_mcs_list = list(x_mcs_array)
print("3x3 matrix of standard normal random variables.")
print('Samples stored as list:')
print('Data type:', type(x_mcs_list))
print('Number of samples:', len(x_mcs_list))
print('Dimensions of samples:', np.shape(x_mcs_list))
print('Samples')
print(x_mcs_list)
print()
# Notice that, in the ndarray case, the samples have dimension (5, 3, 3) and, in the list case, there
# are five 3x3 arrays. That is, there are five samples, each a 3x3 matrix; the constant coefficient
# is passed separately through RunModel's `coeff` keyword argument.
###Output
Constant Coefficient:
2.5
Monte Carlo samples of a 3x3 matrix of standard normal random variables.
Samples stored as an array:
Data type: <class 'numpy.ndarray'>
Number of samples: 5
Dimensions of samples: (5, 3, 3)
Samples
[[[ 1.41844416e-01 3.02388356e-01 -1.23439338e+00]
[ 7.20992367e-01 1.52119064e+00 3.89663401e-01]
[-8.41732889e-01 6.22714399e-01 4.81666741e-01]]
[[-6.44218784e-01 -1.15450313e+00 -1.13456854e+00]
[ 1.19971101e+00 3.80664178e-02 -8.49111812e-02]
[-5.62372993e-01 1.02606814e+00 4.24392486e-01]]
[[-1.47495390e+00 -1.83639697e-05 1.03667812e-01]
[ 3.67230456e-01 4.43218583e-01 2.19753402e-01]
[ 1.68307016e+00 6.50639674e-02 1.73664803e+00]]
[[ 8.12629703e-01 -7.34797593e-01 -4.92923798e-01]
[-6.03117372e-01 1.84220244e+00 6.86679520e-02]
[-6.17815065e-01 1.38407753e-01 -1.02776267e+00]]
[[-7.73687187e-01 -1.14836191e+00 -6.79684240e-01]
[-9.67105857e-01 -1.31879204e+00 4.76126420e-01]
[-2.94287448e-01 -1.70578952e-01 1.24595246e-01]]]
3x3 matrix of standard normal random variables.
Samples stored as list:
Data type: <class 'list'>
Number of samples: 5
Dimensions of samples: (5, 3, 3)
Samples
[array([[ 0.14184442, 0.30238836, -1.23439338],
[ 0.72099237, 1.52119064, 0.3896634 ],
[-0.84173289, 0.6227144 , 0.48166674]]), array([[-0.64421878, -1.15450313, -1.13456854],
[ 1.19971101, 0.03806642, -0.08491118],
[-0.56237299, 1.02606814, 0.42439249]]), array([[-1.47495390e+00, -1.83639697e-05, 1.03667812e-01],
[ 3.67230456e-01, 4.43218583e-01, 2.19753402e-01],
[ 1.68307016e+00, 6.50639674e-02, 1.73664803e+00]]), array([[ 0.8126297 , -0.73479759, -0.4929238 ],
[-0.60311737, 1.84220244, 0.06866795],
[-0.61781506, 0.13840775, -1.02776267]]), array([[-0.77368719, -1.14836191, -0.67968424],
[-0.96710586, -1.31879204, 0.47612642],
[-0.29428745, -0.17057895, 0.12459525]])]
###Markdown
4.1 Pass samples as ndarray, Python class called, serial execution This example uses the following file: model_script = python_model.py
###Code
if pick_model == 'mixed' or pick_model == 'all':
# Call to RunModel - Here we run the model while instantiating the RunModel object.
t = time.time()
m41 = RunModel(samples=x_mcs_array, ntasks=1, model_script='python_model.py',
model_object_name='det_rvs_fixed', model_dir='Python_Runs', vec=False, coeff=x)
t_ser_python = time.time() - t
print("\nTime for serial execution:")
print(t_ser_python)
print()
print("The values returned from the Matlab simulation:")
print(m41.qoi_list)
###Output
Time for serial execution:
0.00769805908203125
The values returned from the Python simulation:
[-5.673631015232223, -2.3869409943327016, -2.9726392645350614, -3.948841380481718, 0.5962156121663712]
###Markdown
4.2 Pass samples as list, Python class called, serial execution This example uses the following file: model_script = python_model.py
###Code
if pick_model == 'mixed' or pick_model == 'all':
# Call to RunModel - Here we run the model while instantiating the RunModel object.
t = time.time()
m42 = RunModel(samples=x_mcs_list, ntasks=1, model_script='python_model.py',
model_object_name='det_rvs_fixed', model_dir='Python_Runs', vec=False, coeff=x)
t_ser_python = time.time() - t
print("\nTime for serial execution:")
print(t_ser_python)
print()
print("The values returned from the Matlab simulation:")
print(m42.qoi_list)
###Output
Time for serial execution:
0.00405573844909668
The values returned from the Python simulation:
[-5.673631015232223, -2.3869409943327016, -2.9726392645350614, -3.948841380481718, 0.5962156121663712]
|
Coursera_RL_Course/week4/practice_approx_qlearning.ipynb | ###Markdown
Approximate q-learning. In this notebook you will teach a __tensorflow__ neural network to do Q-learning. __Frameworks__ - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
###Code
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
###Output
_____no_output_____
###Markdown
Approximate (deep) Q-learning: building the network. To train a neural network policy one must have a neural network policy. Let's build it. Since we're working with pre-extracted features (cart positions, angles and velocities), we don't need a complicated network yet. In fact, let's build something like this for starters: ![img](https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/yet_another_week/_resource/qlearning_scheme.png) For your first run, please only use linear layers (L.Dense) and activations. Stuff like batch normalization or dropout may ruin everything if used haphazardly. Also please avoid using nonlinearities like sigmoid & tanh: the agent's observations are not normalized, so sigmoids may become saturated from init. Ideally you should start small, with maybe 1-2 hidden layers of < 200 neurons, and then increase the network size if the agent doesn't beat the target score.
###Code
import tensorflow as tf
import keras
import keras.layers as L
tf.reset_default_graph()
sess = tf.InteractiveSession()
keras.backend.set_session(sess)
network = keras.models.Sequential()
#network.add(L.InputLayer(state_dim))
# let's create a network for approximate q-learning following guidelines above
network.add(L.Dense(100, input_shape=(state_dim[0],), activation='relu'))
network.add(L.Dense(100, activation='relu'))
network.add(L.Dense(n_actions, activation=None))
def get_action(state, epsilon=0):
"""
sample actions with epsilon-greedy policy
recap: with p = epsilon pick random action, else pick action with highest Q(s,a)
"""
q_values = network.predict(state[None])[0]
explora_factor = np.random.random()
if explora_factor < epsilon:
action = np.random.choice(n_actions, 1)[0]
else:
action = np.argmax(q_values)
return action
assert network.output_shape == (None, n_actions), "please make sure your model maps state s -> [Q(s,a0), ..., Q(s, a_last)]"
assert network.layers[-1].activation == keras.activations.linear, "please make sure you predict q-values without nonlinearity"
# test epsilon-greedy exploration
s = env.reset()
assert np.shape(get_action(s)) == (), "please return just one action (integer)"
for eps in [0., 0.1, 0.5, 1.0]:
state_frequencies = np.bincount([get_action(s, epsilon=eps) for i in range(10000)], minlength=n_actions)
best_action = state_frequencies.argmax()
assert abs(state_frequencies[best_action] - 10000 * (1 - eps + eps / n_actions)) < 200
for other_action in range(n_actions):
if other_action != best_action:
assert abs(state_frequencies[other_action] - 10000 * (eps / n_actions)) < 200
print('e=%.1f tests passed'%eps)
###Output
W0925 19:21:29.789233 139877167286016 deprecation_wrapper.py:119] From /opt/conda/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:2741: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
W0925 19:21:29.791384 139877167286016 deprecation_wrapper.py:119] From /opt/conda/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
###Markdown
Q-learning via gradient descent. We shall now train our agent's Q-function by minimizing the TD loss: $$ L = \frac{1}{N} \sum_i \left(Q_{\theta}(s,a) - \left[r(s,a) + \gamma \cdot \max_{a'} Q_{-}(s', a')\right]\right)^2 $$ where * $s, a, r, s'$ are the current state, action, reward and next state respectively * $\gamma$ is the discount factor, defined in the code below. The tricky part is $Q_{-}(s',a')$. From an engineering standpoint, it's the same as $Q_{\theta}$ - the output of your neural network policy. However, when doing gradient descent, __we won't propagate gradients through it__ to make training more stable (see lectures). To do so, we shall use the `tf.stop_gradient` function, which basically says "consider this thing constant when doing backprop".
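Before wiring this into the TensorFlow graph, a tiny numeric illustration of the TD target may help; the numbers below are made up purely for the example:

```python
import numpy as np

gamma = 0.99
r = 1.0                                # reward observed for (s, a)
q_next = np.array([0.2, 0.7])          # Q_(s', a') predicted for each action
td_target = r + gamma * q_next.max()   # r + gamma * max_a' Q_(s', a')
print(td_target)                       # 1.693
```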
###Code
# Create placeholders for the <s, a, r, s'> tuple and a special indicator for game end (is_done = True)
states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
actions_ph = keras.backend.placeholder(dtype='int32', shape=[None])
rewards_ph = keras.backend.placeholder(dtype='float32', shape=[None])
next_states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
is_done_ph = keras.backend.placeholder(dtype='bool', shape=[None])
#get q-values for all actions in current states
predicted_qvalues = network(states_ph)
#select q-values for chosen actions
predicted_qvalues_for_actions = tf.reduce_sum(predicted_qvalues * tf.one_hot(actions_ph, n_actions), axis=1)
gamma = 0.99
# compute q-values for all actions in next states
predicted_next_qvalues = network(next_states_ph)
# compute V*(next_states) using predicted next q-values
next_state_values = tf.reduce_max(predicted_next_qvalues, axis=1)
# compute "target q-values" for loss - it's what's inside square parentheses in the above formula.
target_qvalues_for_actions = rewards_ph + (gamma*next_state_values)
# at the last state we shall use simplified formula: Q(s,a) = r(s,a) since s' doesn't exist
target_qvalues_for_actions = tf.where(is_done_ph, rewards_ph, target_qvalues_for_actions)
#mean squared error loss to minimize
loss = (predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues_for_actions)) ** 2
loss = tf.reduce_mean(loss)
# training function that resembles agent.update(state, action, reward, next_state) from tabular agent
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
assert tf.gradients(loss, [predicted_qvalues_for_actions])[0] is not None, "make sure you update q-values for chosen actions and not just all actions"
assert tf.gradients(loss, [predicted_next_qvalues])[0] is None, "make sure you don't propagate gradient w.r.t. Q_(s',a')"
assert predicted_next_qvalues.shape.ndims == 2, "make sure you predicted q-values for all actions in next state"
assert next_state_values.shape.ndims == 1, "make sure you computed V(s') as maximum over just the actions axis and not all axes"
assert target_qvalues_for_actions.shape.ndims == 1, "there's something wrong with target q-values, they must be a vector"
###Output
_____no_output_____
###Markdown
Playing the game
###Code
def generate_session(t_max=1000, epsilon=0, train=False):
"""play env with approximate q-learning agent and train it at the same time"""
total_reward = 0
s = env.reset()
for t in range(t_max):
a = get_action(s, epsilon=epsilon)
next_s, r, done, _ = env.step(a)
if train:
sess.run(train_step,{
states_ph: [s], actions_ph: [a], rewards_ph: [r],
next_states_ph: [next_s], is_done_ph: [done]
})
total_reward += r
s = next_s
if done: break
return total_reward
epsilon = 0.5
for i in range(1000):
session_rewards = [generate_session(epsilon=epsilon, train=True) for _ in range(100)]
print("epoch #{}\tmean reward = {:.3f}\tepsilon = {:.3f}".format(i, np.mean(session_rewards), epsilon))
epsilon *= 0.99
assert epsilon >= 1e-4, "Make sure epsilon is always nonzero during training"
if np.mean(session_rewards) > 300:
print ("You Win!")
break
###Output
epoch #0 mean reward = 13.630 epsilon = 0.500
epoch #1 mean reward = 13.660 epsilon = 0.495
epoch #2 mean reward = 14.920 epsilon = 0.490
epoch #3 mean reward = 13.530 epsilon = 0.485
epoch #4 mean reward = 13.710 epsilon = 0.480
epoch #5 mean reward = 13.660 epsilon = 0.475
epoch #6 mean reward = 18.020 epsilon = 0.471
epoch #7 mean reward = 14.550 epsilon = 0.466
epoch #8 mean reward = 15.050 epsilon = 0.461
epoch #9 mean reward = 17.100 epsilon = 0.457
epoch #10 mean reward = 23.980 epsilon = 0.452
epoch #11 mean reward = 35.830 epsilon = 0.448
epoch #12 mean reward = 29.650 epsilon = 0.443
epoch #13 mean reward = 36.830 epsilon = 0.439
epoch #14 mean reward = 43.600 epsilon = 0.434
epoch #15 mean reward = 53.080 epsilon = 0.430
epoch #16 mean reward = 67.370 epsilon = 0.426
epoch #17 mean reward = 111.100 epsilon = 0.421
epoch #18 mean reward = 128.160 epsilon = 0.417
epoch #19 mean reward = 137.310 epsilon = 0.413
epoch #20 mean reward = 163.740 epsilon = 0.409
epoch #21 mean reward = 159.430 epsilon = 0.405
epoch #22 mean reward = 184.690 epsilon = 0.401
epoch #23 mean reward = 168.760 epsilon = 0.397
epoch #24 mean reward = 171.620 epsilon = 0.393
epoch #25 mean reward = 169.450 epsilon = 0.389
epoch #26 mean reward = 206.360 epsilon = 0.385
epoch #27 mean reward = 169.980 epsilon = 0.381
epoch #28 mean reward = 261.980 epsilon = 0.377
epoch #29 mean reward = 211.730 epsilon = 0.374
epoch #30 mean reward = 366.820 epsilon = 0.370
You Win!
###Markdown
How to interpret results. Welcome to the f.. world of deep f...n reinforcement learning. Don't expect the agent's reward to smoothly go up. Hope for it to increase eventually. If it deems you worthy. Seriously though, * __mean reward__ is the average reward per game. For a correct implementation it may stay low for some 10 epochs, then start growing while oscillating insanely, and converge by ~50-100 steps depending on the network architecture. * If it never reaches the target score by the end of the for loop, try increasing the number of hidden neurons or look at the epsilon. * __epsilon__ - the agent's willingness to explore. If you see that the agent is already at < 0.01 epsilon before its mean reward is at least 200, just reset it back to 0.1 - 0.5. Record videos. As usual, we now use `gym.wrappers.Monitor` to record a video of our agent playing the game. Unlike our previous attempts with state binarization, this time we expect our agent to act ~~(or fail)~~ more smoothly, since there's no more binarization error at play. As you already did with tabular q-learning, we set epsilon=0 for final evaluation to prevent the agent from exploring itself to death.
###Code
#record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),directory="videos",force=True)
sessions = [generate_session(epsilon=0, train=False) for _ in range(100)]
env.close()
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
###Output
_____no_output_____
###Markdown
--- Submit to coursera
###Code
from submit import submit_cartpole
submit_cartpole(generate_session, '[email protected]', 'nbV4ZMrRGkEn00jO')
###Output
Submitted to Coursera platform. See results on assignment page!
|
site/en-snapshot/model_optimization/guide/quantization/training_comprehensive_guide.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Quantization aware training comprehensive guide Welcome to the comprehensive guide for Keras quantization aware training. This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the [API docs](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/quantization). * If you want to see the benefits of quantization aware training and what's supported, see the [overview](https://www.tensorflow.org/model_optimization/guide/quantization/training.md). * For a single end-to-end example, see the [quantization aware training example](https://www.tensorflow.org/model_optimization/guide/quantization/training_example.md). The following use cases are covered: * Deploy a model with 8-bit quantization with these steps. * Define a quantization aware model. * For Keras HDF5 models only, use special checkpointing and deserialization logic. Training is otherwise standard. * Create a quantized model from the quantization aware one. * Experiment with quantization. * Anything for experimentation has no supported path to deployment. * Custom Keras layers fall under experimentation. Setup For finding the APIs you need and understanding purposes, you can run but skip reading this section.
###Code
! pip uninstall -y tensorflow
! pip install -q tf-nightly
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model= setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def setup_pretrained_model():
model = setup_model()
pretrained_weights = setup_pretrained_weights()
model.load_weights(pretrained_weights)
return model
setup_model()
pretrained_weights = setup_pretrained_weights()
###Output
_____no_output_____
###Markdown
Define quantization aware model By defining models in the following ways, there are available paths to deployment to backends listed in the [overview page](https://www.tensorflow.org/model_optimization/guide/quantization/training.md). By default, 8-bit quantization is used. Note: a quantization aware model is not actually quantized. Creating a quantized model is a separate step. Quantize whole model **Your use case:** * Subclassed models are not supported. **Tips for better model accuracy:** * Try "Quantize some layers" to skip quantizing the layers that reduce accuracy the most. * It's generally better to finetune with quantization aware training as opposed to training from scratch. To make the whole model aware of quantization, apply `tfmot.quantization.keras.quantize_model` to the model.
###Code
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Quantize some layers Quantizing a model can have a negative effect on accuracy. You can selectively quantize layers of a model to explore the trade-off between accuracy, speed, and model size. **Your use case:** * To deploy to a backend that only works well with fully quantized models (e.g. EdgeTPU v1, most DSPs), try "Quantize whole model". **Tips for better model accuracy:** * It's generally better to finetune with quantization aware training as opposed to training from scratch. * Try quantizing the later layers instead of the first layers. * Avoid quantizing critical layers (e.g. attention mechanism). In the example below, quantize only the `Dense` layers.
###Code
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `quantize_annotate_layer` to annotate that only the
# Dense layers should be quantized.
def apply_quantization_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.quantization.keras.quantize_annotate_layer(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
base_model,
clone_function=apply_quantization_to_dense,
)
# Now that the Dense layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its `name` property, and look for that name in the `clone_function`.
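For instance, a minimal sketch of name-based selection (assuming the target layer is named `'dense'`; use the name printed by the next cell for your own model):

```python
# Annotate a layer by its `name` property instead of its type.
def apply_quantization_by_name(layer):
    if layer.name == 'dense':  # hypothetical name; substitute your layer's name
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

annotated_model = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_quantization_by_name,
)
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
```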
###Code
print(base_model.layers[0].name)
###Output
_____no_output_____
###Markdown
More readable but potentially lower model accuracy This is not compatible with finetuning with quantization aware training, which is why it may be less accurate than the above examples. **Functional example**
###Code
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
i = tf.keras.Input(shape=(20,))
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
annotated_model = tf.keras.Model(inputs=i, outputs=o)
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
# For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the
# quantized model can take in float inputs instead of only uint8.
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
**Sequential example**
###Code
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
annotated_model = tf.keras.Sequential([
tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Checkpoint and deserialize **Your use case:** this code is only needed for the HDF5 model format (not HDF5 weights or other formats).
###Code
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
quant_aware_model.save(keras_model_file)
# `quantize_scope` is needed for deserializing HDF5 models.
with tfmot.quantization.keras.quantize_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
###Output
_____no_output_____
###Markdown
Create and deploy quantized model In general, reference the documentation for the deployment backend that you will use. This is an example for the TFLite backend.
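Once the conversion in the cell below has produced `quantized_tflite_model`, the flatbuffer can be saved and exercised with the standard TFLite interpreter; a minimal sketch (the file name is arbitrary):

```python
# Save the converted flatbuffer and run one inference with the TFLite interpreter.
with open('quantized_model.tflite', 'wb') as f:
    f.write(quantized_tflite_model)

interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]['index'],
                       np.random.randn(1, 20).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))
```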
###Code
base_model = setup_pretrained_model()
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Typically you train the model here.
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
###Output
_____no_output_____
###Markdown
Experiment with quantization **Your use case**: using the following APIs means that there is no supported path to deployment. The features are also experimental and not subject to backward compatibility. * `tfmot.quantization.keras.QuantizeConfig` * `tfmot.quantization.keras.quantizers.Quantizer` * `tfmot.quantization.keras.quantizers.LastValueQuantizer` * `tfmot.quantization.keras.quantizers.MovingAverageQuantizer` Setup: DefaultDenseQuantizeConfig Experimenting requires using `tfmot.quantization.keras.QuantizeConfig`, which describes how to quantize the weights, activations, and outputs of a layer. Below is an example that defines the same `QuantizeConfig` used for the `Dense` layer in the API defaults. During the forward propagation in this example, the `LastValueQuantizer` returned in `get_weights_and_quantizers` is called with `layer.kernel` as the input, producing an output. The output replaces `layer.kernel` in the original forward propagation of the `Dense` layer, via the logic defined in `set_quantize_weights`. The same idea applies to the activations and outputs.
###Code
LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer
class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
# Configure how to quantize weights.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]
# Configure how to quantize activations.
def get_activations_and_quantizers(self, layer):
return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]
def set_quantize_weights(self, layer, quantize_weights):
# Add this line for each item returned in `get_weights_and_quantizers`
# , in the same order
layer.kernel = quantize_weights[0]
def set_quantize_activations(self, layer, quantize_activations):
# Add this line for each item returned in `get_activations_and_quantizers`
# , in the same order.
layer.activation = quantize_activations[0]
# Configure how to quantize outputs (may be equivalent to activations).
def get_output_quantizers(self, layer):
return []
def get_config(self):
return {}
###Output
_____no_output_____
###Markdown
Quantize custom Keras layer This example uses the `DefaultDenseQuantizeConfig` to quantize the `CustomLayer`. Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `CustomLayer` and pass in the `QuantizeConfig`. * Use `tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class CustomLayer(tf.keras.layers.Dense):
pass
model = quantize_annotate_model(tf.keras.Sequential([
quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
{'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
'CustomLayer': CustomLayer}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Modify quantization parameters **Common mistake:** quantizing the bias to fewer than 32-bits usually harms model accuracy too much. This example modifies the `Dense` layer to use 4-bits for its weights instead of the default 8-bits. The rest of the model continues to use API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4-bit instead of 8-bits.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]
###Output
_____no_output_____
###Markdown
Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use `tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Modify parts of layer to quantize This example modifies the `Dense` layer to skip quantizing the activation. The rest of the model continues to use API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
def get_activations_and_quantizers(self, layer):
# Skip quantizing activations.
return []
def set_quantize_activations(self, layer, quantize_activations):
# Empty since `get_activations_and_quantizers` returns
# an empty list.
return
###Output
_____no_output_____
###Markdown
Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use `tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Use custom quantization algorithm The `tfmot.quantization.keras.quantizers.Quantizer` class is a callable that can apply any algorithm to its inputs. In this example, the inputs are the weights, and we apply the math in the `FixedRangeQuantizer` \_\_call\_\_ function to the weights. Instead of the original weights values, the output of the `FixedRangeQuantizer` is now passed to whatever would have used the weights.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer):
"""Quantizer which forces outputs to be between -1 and 1."""
def build(self, tensor_shape, name, layer):
# Not needed. No new TensorFlow variables needed.
return {}
def __call__(self, inputs, training, weights, **kwargs):
return tf.keras.backend.clip(inputs, -1.0, 1.0)
def get_config(self):
# Not needed. No __init__ parameters to serialize.
return {}
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with the custom FixedRangeQuantizer.
def get_weights_and_quantizers(self, layer):
# Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer.
return [(layer.kernel, FixedRangeQuantizer())]
###Output
_____no_output_____
###Markdown
Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use `tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this `Dense` layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Quantization aware training comprehensive guide View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Welcome to the comprehensive guide for Keras quantization aware training.This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the[API docs](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/quantization).* If you want to see the benefits of quantization aware training and what's supported, see the [overview](https://www.tensorflow.org/model_optimization/guide/quantization/training.md).* For a single end-to-end example, see the [quantization aware training example](https://www.tensorflow.org/model_optimization/guide/quantization/training_example.md).The following use cases are covered:* Deploy a model with 8-bit quantization with these steps. * Define a quantization aware model. * For Keras HDF5 models only, use special checkpointing and deserialization logic. Training is otherwise standard. * Create a quantized model from the quantization aware one.* Experiment with quantization. * Anything for experimentation has no supported path to deployment. * Custom Keras layers fall under experimentation. Setup For finding the APIs you need and understanding purposes, you can run but skip reading this section.
###Code
! pip uninstall -y tensorflow
! pip install -q tf-nightly
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model= setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def setup_pretrained_model():
model = setup_model()
pretrained_weights = setup_pretrained_weights()
model.load_weights(pretrained_weights)
return model
setup_model()
pretrained_weights = setup_pretrained_weights()
###Output
_____no_output_____
###Markdown
Define quantization aware model By defining models in the following ways, there are available paths to deployment to backends listed in the [overview page](https://www.tensorflow.org/model_optimization/guide/quantization/training.md). By default, 8-bit quantization is used.Note: a quantization aware model is not actually quantized. Creating a quantized model is a separate step. Quantize whole model **Your use case:*** Subclassed models are not supported.**Tips for better model accuracy:*** Try "Quantize some layers" to skip quantizing the layers that reduce accuracy the most.* It's generally better to finetune with quantization aware training as opposed to training from scratch. To make the whole model aware of quantization, apply `tfmot.quantization.keras.quantize_model` to the model.
###Code
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Quantize some layers Quantizing a model can have a negative effect on accuracy. You can selectively quantize layers of a model to explore the trade-off between accuracy, speed, and model size.**Your use case:*** To deploy to a backend that only works well with fully quantized models (e.g. EdgeTPU v1, most DSPs), try "Quantize whole model".**Tips for better model accuracy:*** It's generally better to finetune with quantization aware training as opposed to training from scratch.* Try quantizing the later layers instead of the first layers.* Avoid quantizing critical layers (e.g. attention mechanism). In the example below, quantize only the `Dense` layers.
###Code
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `quantize_annotate_layer` to annotate that only the
# Dense layers should be quantized.
def apply_quantization_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.quantization.keras.quantize_annotate_layer(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
base_model,
clone_function=apply_quantization_to_dense,
)
# Now that the Dense layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its `name` property, and look for that name in the `clone_function`.
###Code
print(base_model.layers[0].name)
###Output
_____no_output_____
###Markdown
More readable but potentially lower model accuracy This is not compatible with finetuning with quantization aware training, which is why it may be less accurate than the above examples. **Functional example**
###Code
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
i = tf.keras.Input(shape=(20,))
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
annotated_model = tf.keras.Model(inputs=i, outputs=o)
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
# For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the
# quantized model can take in float inputs instead of only uint8.
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
**Sequential example**
###Code
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
annotated_model = tf.keras.Sequential([
tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Checkpoint and deserialize **Your use case:** this code is only needed for the HDF5 model format (not HDF5 weights or other formats).
###Code
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
quant_aware_model.save(keras_model_file)
# `quantize_scope` is needed for deserializing HDF5 models.
with tfmot.quantization.keras.quantize_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
###Output
_____no_output_____
###Markdown
Create and deploy quantized model In general, reference the documentation for the deployment backend that youwill use.This is an example for the TFLite backend.
###Code
base_model = setup_pretrained_model()
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Typically you train the model here.
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
###Output
_____no_output_____
###Markdown
Experiment with quantization **Your use case**: using the following APIs means that there is nosupported path to deployment. The features are also experimental and notsubject to backward compatibility. * `tfmot.quantization.keras.QuantizeConfig` * `tfmot.quantization.keras.quantizers.Quantizer` * `tfmot.quantization.keras.quantizers.LastValueQuantizer` * `tfmot.quantization.keras.quantizers.MovingAverageQuantizer` Setup: DefaultDenseQuantizeConfig Experimenting requires using `tfmot.quantization.keras.QuantizeConfig`, which describes how to quantize the weights, activations, and outputs of a layer.Below is an example that defines the same `QuantizeConfig` used for the `Dense` layer in the API defaults.During the forward propagation in this example, the `LastValueQuantizer` returned in `get_weights_and_quantizers` is called with `layer.kernel` as the input, producing an output. The output replaces `layer.kernel`in the original forward propagation of the `Dense` layer, via the logic defined in `set_quantize_weights`. The same idea applies to the activations and outputs.
###Code
LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer
class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
# Configure how to quantize weights.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]
# Configure how to quantize activations.
def get_activations_and_quantizers(self, layer):
return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]
def set_quantize_weights(self, layer, quantize_weights):
# Add this line for each item returned in `get_weights_and_quantizers`
# , in the same order
layer.kernel = quantize_weights[0]
def set_quantize_activations(self, layer, quantize_activations):
# Add this line for each item returned in `get_activations_and_quantizers`
# , in the same order.
layer.activation = quantize_activations[0]
# Configure how to quantize outputs (may be equivalent to activations).
def get_output_quantizers(self, layer):
return []
def get_config(self):
return {}
###Output
_____no_output_____
###Markdown
Quantize custom Keras layer This example uses the `DefaultDenseQuantizeConfig` to quantize the `CustomLayer`.Applying the configuration is the same acrossthe "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `CustomLayer` and pass in the `QuantizeConfig`. * Use`tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class CustomLayer(tf.keras.layers.Dense):
pass
model = quantize_annotate_model(tf.keras.Sequential([
quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
{'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
'CustomLayer': CustomLayer}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Modify quantization parameters **Common mistake:** quantizing the bias to fewer than 32-bits usually harms model accuracy too much.This example modifies the `Dense` layer to use 4-bits for its weights insteadof the default 8-bits. The rest of the model continues to use API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4-bit instead of 8-bits.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]
###Output
_____no_output_____
###Markdown
Applying the configuration is the same acrossthe "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use`tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Modify parts of layer to quantize This example modifies the `Dense` layer to skip quantizing the activation. The rest of the model continues to use API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
def get_activations_and_quantizers(self, layer):
# Skip quantizing activations.
return []
def set_quantize_activations(self, layer, quantize_activations):
# Empty since `get_activaations_and_quantizers` returns
# an empty list.
return
###Output
_____no_output_____
###Markdown
Applying the configuration is the same acrossthe "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use`tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Use custom quantization algorithm The `tfmot.quantization.keras.quantizers.Quantizer` class is a callable thatcan apply any algorithm to its inputs.In this example, the inputs are the weights, and we apply the math in the`FixedRangeQuantizer` \_\_call\_\_ function to the weights. Instead of the originalweights values, the output of the`FixedRangeQuantizer` is now passed to whatever would have used the weights.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer):
"""Quantizer which forces outputs to be between -1 and 1."""
def build(self, tensor_shape, name, layer):
# Not needed. No new TensorFlow variables needed.
return {}
def __call__(self, inputs, training, weights, **kwargs):
return tf.keras.backend.clip(inputs, -1.0, 1.0)
def get_config(self):
# Not needed. No __init__ parameters to serialize.
return {}
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4-bit instead of 8-bits.
def get_weights_and_quantizers(self, layer):
# Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer.
return [(layer.kernel, FixedRangeQuantizer())]
###Output
_____no_output_____
###Markdown
Applying the configuration is the same acrossthe "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use`tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this `Dense` layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Quantization aware training comprehensive guide View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Welcome to the comprehensive guide for Keras quantization aware training.This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the[API docs](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/quantization).* If you want to see the benefits of quantization aware training and what's supported, see the [overview](https://www.tensorflow.org/model_optimization/guide/quantization/training.md).* For a single end-to-end example, see the [quantization aware training example](https://www.tensorflow.org/model_optimization/guide/quantization/training_example.md).The following use cases are covered:* Deploy a model with 8-bit quantization with these steps. * Define a quantization aware model. * For Keras HDF5 models only, use special checkpointing and deserialization logic. Training is otherwise standard. * Create a quantized model from the quantization aware one.* Experiment with quantization. * Anything for experimentation has no supported path to deployment. * Custom Keras layers fall under experimentation. Setup For finding the APIs you need and understanding purposes, you can run but skip reading this section.
###Code
! pip uninstall -y tensorflow
! pip install -q tf-nightly
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model= setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def setup_pretrained_model():
model = setup_model()
pretrained_weights = setup_pretrained_weights()
model.load_weights(pretrained_weights)
return model
setup_model()
pretrained_weights = setup_pretrained_weights()
###Output
_____no_output_____
###Markdown
Define quantization aware model By defining models in the following ways, there are available paths to deployment to backends listed in the [overview page](https://www.tensorflow.org/model_optimization/guide/quantization/training.md). By default, 8-bit quantization is used.Note: a quantization aware model is not actually quantized. Creating a quantized model is a separate step. Quantize whole model **Your use case:*** Subclassed models are not supported.**Tips for better model accuracy:*** Try "Quantize some layers" to skip quantizing the layers that reduce accuracy the most.* It's generally better to finetune with quantization aware training as opposed to training from scratch. To make the whole model aware of quantization, apply `tfmot.quantization.keras.quantize_model` to the model.
###Code
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Quantize some layers Quantizing a model can have a negative effect on accuracy. You can selectively quantize layers of a model to explore the trade-off between accuracy, speed, and model size.**Your use case:*** To deploy to a backend that only works well with fully quantized models (e.g. EdgeTPU v1, most DSPs), try "Quantize whole model".**Tips for better model accuracy:*** It's generally better to finetune with quantization aware training as opposed to training from scratch.* Try quantizing the later layers instead of the first layers.* Avoid quantizing critical layers (e.g. attention mechanism). In the example below, quantize only the `Dense` layers.
###Code
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `quantize_annotate_layer` to annotate that only the
# Dense layers should be quantized.
def apply_quantization_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.quantization.keras.quantize_annotate_layer(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
base_model,
clone_function=apply_quantization_to_dense,
)
# Now that the Dense layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its `name` property, and look for that name in the `clone_function`.
###Code
print(base_model.layers[0].name)
###Output
_____no_output_____
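###Markdown
A minimal sketch of matching by name (an assumption for illustration: we reuse the layer name printed above rather than hard-coding one):
###Code
# Hypothetical example: annotate a layer by matching its `name` in the
# `clone_function`, instead of matching on the layer's type.
target_name = base_model.layers[0].name  # the Dense layer's name printed above
def apply_quantization_by_name(layer):
  if layer.name == target_name:
    return tfmot.quantization.keras.quantize_annotate_layer(layer)
  return layer
annotated_model = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_quantization_by_name,
)
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
###Output
_____no_output_____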
###Markdown
More readable but potentially lower model accuracy This is not compatible with finetuning with quantization aware training, which is why it may be less accurate than the above examples. **Functional example**
###Code
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
i = tf.keras.Input(shape=(20,))
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
annotated_model = tf.keras.Model(inputs=i, outputs=o)
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
# For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the
# quantized model can take in float inputs instead of only uint8.
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
**Sequential example**
###Code
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
annotated_model = tf.keras.Sequential([
tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Checkpoint and deserialize **Your use case:** this code is only needed for the HDF5 model format (not HDF5 weights or other formats).
###Code
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
quant_aware_model.save(keras_model_file)
# `quantize_scope` is needed for deserializing HDF5 models.
with tfmot.quantization.keras.quantize_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
###Output
_____no_output_____
###Markdown
Create and deploy quantized model In general, reference the documentation for the deployment backend that you will use. This is an example for the TFLite backend.
###Code
base_model = setup_pretrained_model()
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Typically you train the model here.
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
###Output
_____no_output_____
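###Markdown
To sanity-check the conversion, the quantized model can be run with the TFLite interpreter. A minimal sketch (the random input below is only for illustration):
###Code
# Run the converted model once to verify that it loads and executes.
interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
# The toy model takes float32 inputs of shape (1, 20).
interpreter.set_tensor(input_details['index'], np.random.randn(1, 20).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_details['index']))
###Output
_____no_output_____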
###Markdown
Experiment with quantization **Your use case**: using the following APIs means that there is no supported path to deployment. For instance, TFLite conversion and kernel implementations only support 8-bit quantization. The features are also experimental and not subject to backward compatibility. * `tfmot.quantization.keras.QuantizeConfig` * `tfmot.quantization.keras.quantizers.Quantizer` * `tfmot.quantization.keras.quantizers.LastValueQuantizer` * `tfmot.quantization.keras.quantizers.MovingAverageQuantizer` Setup: DefaultDenseQuantizeConfig Experimenting requires using `tfmot.quantization.keras.QuantizeConfig`, which describes how to quantize the weights, activations, and outputs of a layer. Below is an example that defines the same `QuantizeConfig` used for the `Dense` layer in the API defaults. During the forward propagation in this example, the `LastValueQuantizer` returned in `get_weights_and_quantizers` is called with `layer.kernel` as the input, producing an output. The output replaces `layer.kernel` in the original forward propagation of the `Dense` layer, via the logic defined in `set_quantize_weights`. The same idea applies to the activations and outputs.
###Code
LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer
class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
# Configure how to quantize weights.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]
# Configure how to quantize activations.
def get_activations_and_quantizers(self, layer):
return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]
def set_quantize_weights(self, layer, quantize_weights):
# Add this line for each item returned in `get_weights_and_quantizers`
# , in the same order
layer.kernel = quantize_weights[0]
def set_quantize_activations(self, layer, quantize_activations):
# Add this line for each item returned in `get_activations_and_quantizers`
# , in the same order.
layer.activation = quantize_activations[0]
# Configure how to quantize outputs (may be equivalent to activations).
def get_output_quantizers(self, layer):
return []
def get_config(self):
return {}
###Output
_____no_output_____
###Markdown
Quantize custom Keras layer This example uses the `DefaultDenseQuantizeConfig` to quantize the `CustomLayer`. Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `CustomLayer` and pass in the `QuantizeConfig`. * Use `tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class CustomLayer(tf.keras.layers.Dense):
pass
model = quantize_annotate_model(tf.keras.Sequential([
quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
{'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
'CustomLayer': CustomLayer}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Modify quantization parameters **Common mistake:** quantizing the bias to fewer than 32 bits usually harms model accuracy too much. This example modifies the `Dense` layer to use 4 bits for its weights instead of the default 8 bits. The rest of the model continues to use API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4 bits instead of the default 8 bits.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]
###Output
_____no_output_____
###Markdown
Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use `tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Modify parts of layer to quantize This example modifies the `Dense` layer to skip quantizing the activation. The rest of the model continues to use API defaults.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
def get_activations_and_quantizers(self, layer):
# Skip quantizing activations.
return []
def set_quantize_activations(self, layer, quantize_activations):
# Empty since `get_activations_and_quantizers` returns
# an empty list.
return
###Output
_____no_output_____
###Markdown
Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use `tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____
###Markdown
Use custom quantization algorithm The `tfmot.quantization.keras.quantizers.Quantizer` class is a callable that can apply any algorithm to its inputs. In this example, the inputs are the weights, and we apply the math in the `FixedRangeQuantizer` \_\_call\_\_ function to the weights. Instead of the original weights values, the output of the `FixedRangeQuantizer` is now passed to whatever would have used the weights.
###Code
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer):
"""Quantizer which forces outputs to be between -1 and 1."""
def build(self, tensor_shape, name, layer):
# Not needed. No new TensorFlow variables needed.
return {}
def __call__(self, inputs, training, weights, **kwargs):
return tf.keras.backend.clip(inputs, -1.0, 1.0)
def get_config(self):
# Not needed. No __init__ parameters to serialize.
return {}
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with a custom algorithm instead of the default.
def get_weights_and_quantizers(self, layer):
# Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer.
return [(layer.kernel, FixedRangeQuantizer())]
###Output
_____no_output_____
###Markdown
Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply `tfmot.quantization.keras.quantize_annotate_layer` to the `Dense` layer and pass in the `QuantizeConfig`. * Use `tfmot.quantization.keras.quantize_annotate_model` to continue to quantize the rest of the model with the API defaults.
###Code
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this `Dense` layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
###Output
_____no_output_____ |
_episodes/07-find-challenges.ipynb | ###Markdown
Challenge 1: Using `grep`>> Which command would result in the following output:>```> and the presence of absence:```> {: .output}>> 1. `grep "of" haiku.txt`> 2. `grep -E "of" haiku.txt`> 3. `grep -w "of" haiku.txt`> 4. `grep -i "of" haiku.txt`Solution> > The correct answer is 3, because the `-w` option looks only for whole-word matches.> > The other options will also match 'of' when part of another word. Challenge 2: Tracking a Species>> Leah has several hundred> data files saved in one directory, each of which is formatted like this:>```> 2013-11-05,deer,5> 2013-11-05,rabbit,22> 2013-11-05,raccoon,7> 2013-11-06,rabbit,19> 2013-11-06,deer,2```> {: .source}>> She wants to write a shell script that takes a species as the first command-line argument> and a directory as the second argument. The script should return one file called `species.txt`> containing a list of dates and the number of that species seen on each date.> For example using the data shown above, `rabbit.txt` would contain:>```> 2013-11-05,22> 2013-11-06,19```> {: .source}>> Put these commands and pipes in the right order to achieve this:>
###Code
%%bash
> cut -d : -f 2
> >
> |
> grep -w $1 -r $2
> |
> $1.txt
> cut -d , -f 1,3
###Output
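_____no_output_____
###Markdown
Solution A sketch of one ordering that works, assuming the pipeline lives inside a shell script that receives the species as `$1` and the directory as `$2` (so it will not run as-is in this cell):
###Code
%%bash
# Find whole-word matches of the species recursively, strip the
# "filename:" prefix that `grep -r` adds, keep the date and count
# fields, and write the result to a file named after the species.
grep -w $1 -r $2 | cut -d : -f 2 | cut -d , -f 1,3 > $1.txt
###Output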
_____no_output_____ |
.ipynb_checkpoints/Curso Pandas #4-checkpoint.ipynb | ###Markdown
Exporting the Dataset
###Code
dados_residencial.to_csv('dados/aluguel_residencial.csv', sep=';')
dados_residencial_2 = pd.read_csv('dados/aluguel_residencial.csv', sep=';')
dados_residencial_2
dados_residencial.to_csv('dados/aluguel_residencial.csv', sep = ';', index = False)
dados_residencial_2 = pd.read_csv('dados/aluguel_residencial.csv', sep=';')
dados_residencial_2
###Output
_____no_output_____
###Markdown
Organizing DataFrames
###Code
data = [[1,2,3],[4,5,6],[7,8,9]]
data
list('321')
df = pd.DataFrame(data,list('321'), list('ZYX'))
df
df.sort_index(inplace = True)
df
df.sort_index(inplace = True, axis = 1)
df
###Output
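_____no_output_____
###Markdown
Rows can also be ordered by the values in a column rather than by the index. A quick sketch on the same toy DataFrame (sorting on column 'X' is an arbitrary choice):
###Code
df.sort_values(by = 'X', inplace = True)
df
###Output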
_____no_output_____ |
cortography/notebooks/PlotNodesEdges.ipynb | ###Markdown
`plot_connectome` Example---Use `Nilearn`'s `plot_connectome` function to visualize nodes and edges. Credits to Dr. Pablo Damasceno for the center of mass file:
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from nilearn import plotting
###Output
_____no_output_____
###Markdown
Load Data---Sample data from Dr. Fei Jiang and centers of mass for Desikan-Killiany atlas regions:
###Code
# load sample data
data = np.load('../data/weighU.npy')
print(data.shape)
# center of mass file:
CenterOfMass_DK = np.load('../data/atlases/DK/com_dk.npy', allow_pickle = True)
CenterOfMass_DK = CenterOfMass_DK.ravel()[0]
print('Center of masses for ', len(CenterOfMass_DK), ' parcels')
# Load DK region names:
DK_region_names = pd.read_csv('../data/atlases/DK/dk_names.csv').set_index('Atlas')
###Output
(68, 6)
Center of masses for 112 parcels
###Markdown
Match the `data` to each region name, assuming these are the correct labels and ordering ...
###Code
DK_dict68 = {}
for i, region in enumerate(DK_region_names[:68].index):
DK_dict68.update({region:data[i,:]})
DK_data68 = pd.DataFrame(DK_dict68)
DK_data68
###Output
_____no_output_____
###Markdown
Create array of coordinates for each DK region:
###Code
coords = np.array([CenterOfMass_DK[region] for region in DK_region_names[:68].index])
print('Array of coordinates for ', len(coords) , 'regions')
###Output
Array of coordinates for 68 regions
###Markdown
Plotting Nodes:
###Code
network = np.array([[0]*68]*68) # 68 cortical regions
plotting.plot_connectome(network, coords, node_color = [[0, 0.5, 0.8]]*68, node_size = data[:,0]**2*120)
###Output
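_____no_output_____
###Markdown
Plotting Edges: `plot_connectome` only draws edges where the adjacency matrix is non-zero, and the all-zero `network` above suppresses them. A sketch with a toy adjacency matrix (an assumption for illustration: correlations between the region weights in `data`):
###Code
# Toy 68x68 adjacency from the correlations of the loaded weights
adjacency = np.corrcoef(data)
# Keep only the strongest connections to avoid clutter
plotting.plot_connectome(adjacency, coords, node_color = [[0, 0.5, 0.8]]*68, node_size = 20, edge_threshold = "99%")
###Output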
_____no_output_____ |
docs/Tutorial_Time_Series_Chains.ipynb | ###Markdown
Time Series Chains Forecasting Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www.www2015.it/documents/proceedings/proceedings/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import itertools
plt.rcParams["figure.figsize"] = [20, 6] # width, height
plt.rcParams['xtick.direction'] = 'out'
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered as motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_1.ipynb) (left) and time series chains (right).
###Code
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
plt.show()
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right, is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member was added to the motif set, its location will also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series: 47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40. Assume that the subsequence length is 1 and the distance between two subsequences is simply the absolute difference between them. To be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications. To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:| Index | Value | Left Index (IL) | Right Index (IR) ||-------|-------|-----------------|------------------|| 1 | 47 | - | 12 || 2 | 32 | 1 | 8 || 3 | 1 | 2 | 5 || 4 | 22 | 2 | 8 || 5 | 2 | 3 | 7 || 6 | 58 | 1 | 12 || 7 | 3 | 5 | 9 || 8 | 36 | 2 | 12 || 9 | 4 | 7 | 11 || 10 | -5 | 3 | 11 || 11 | 5 | 9 | 12 || 12 | 40 | 8 | - |In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, "-o", markerfacecolor="None", markeredgecolor="None", linestyle="None")
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot the fact that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, "-o", markerfacecolor="None", markeredgecolor="None", linestyle="None")
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
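###Markdown
We can check this programmatically with a minimal sketch (our own helper for the toy example, not a STUMPY API; indices are 1-based to match the table above):
###Code
# Left/right nearest neighbor indices (IL/IR) from the table; np.nan = none
IL = [np.nan, 1, 2, 2, 3, 1, 5, 2, 7, 3, 9, 8]
IR = [12, 8, 5, 8, 7, 12, 9, 12, 11, 11, 12, np.nan]
def anchored_chain(j):
    # Follow forward arrows (IR) and keep a link only when the successor's
    # backward arrow (IL) points back at us, per the chain definition above
    chain = [j]
    while not np.isnan(IR[j - 1]) and IL[int(IR[j - 1]) - 1] == j:
        j = int(IR[j - 1])
        chain.append(j)
    return chain
longest = max((anchored_chain(j) for j in range(1, 13)), key=len)
print(longest)  # indices [3, 5, 7, 9, 11], i.e. the values 1, 2, 3, 4, 5
###Output
_____no_output_____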
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality.STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain(s) - the unconditionally longest chain within a time series (there could be more than one if there were chains with the same length) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long GoogleTrend query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
df = pd.read_csv("https://zenodo.org/record/4276348/files/Time_Series_Chains_Kohls_data.csv?download=1")
df.head()
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52 week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the NumPy array, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively. Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
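###Markdown
A quick inspection sketch: each row of `mp` holds the matrix profile value, the (bidirectional) index, the left index, and the right index for one subsequence.
###Code
# Peek at the first few rows: columns are P, I, IL, IR
print(mp[:5, :])
###Output
_____no_output_____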
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, it also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
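###Markdown
As an aside, `stumpy.atsc` grows a chain from a user-specified anchor instead. A quick sketch (the anchor index 20 is arbitrary and only for illustration):
###Code
anchored_chain = stumpy.atsc(mp[:, 2], mp[:, 3], 20)
print(anchored_chain)
###Output
_____no_output_____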
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
plt.show()
###Output
_____no_output_____
###Markdown
Time Series Chains Analyzing Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www.www2015.it/documents/proceedings/proceedings/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import urllib
import ssl
import io
import itertools
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_1.ipynb) (left) and time series chains (right).
###Code
def change_plot_size(width, height, plt):
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = width
fig_size[1] = height
plt.rcParams["figure.figsize"] = fig_size
change_plot_size(20, 6, plt)
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member was added to the motif set, its location will also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series: 47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40. Assume that the subsequence length is 1, and the distance between two subsequences is simply the absolute difference between them (to be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications). To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:| Index | Value | Left Index (IL) | Right Index (IR) ||-------|-------|-----------------|------------------|| 1 | 47 | - | 12 || 2 | 32 | 1 | 8 || 3 | 1 | 2 | 5 || 4 | 22 | 2 | 8 || 5 | 2 | 3 | 7 || 6 | 58 | 1 | 12 || 7 | 3 | 5 | 9 || 8 | 36 | 2 | 12 || 9 | 4 | 7 | 11 || 10 | -5 | 3 | 11 || 11 | 5 | 9 | 12 || 12 | 40 | 8 | - |In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, "-o", markerfacecolor="None", markeredgecolor="None", linestyle="None")
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
###Output
_____no_output_____
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot the fact that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, "-o", markerfacecolor="None", markeredgecolor="None", linestyle="None")
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
###Output
_____no_output_____
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality.STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain(s) - the unconditionally longest chain within a time series (there could be more than one if there were chains with the same length) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long GoogleTrend query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
context = ssl.SSLContext() # Ignore SSL certificate verification for simplicity
url = 'https://sites.google.com/site/timeserieschain/home/Kohls_data.mat?attredirects=0&revision=1'
raw_bytes = urllib.request.urlopen(url, context=context).read()
data = io.BytesIO(raw_bytes)
mat = loadmat(data)
mdata = mat['VarName1']
mdtype = mdata.dtype
df = pd.DataFrame(mdata, dtype=mdtype, columns=['volume'])
df.head()
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52 week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the ndarray, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively:
###Code
?stumpy.stump
###Output
_____no_output_____
###Markdown
Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, it also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
###Output
_____no_output_____
###Markdown
Time Series Chains[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/TDAmeritrade/stumpy/main?filepath=notebooks/Tutorial_Time_Series_Chains.ipynb) Forecasting Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www2015.thewebconf.org/documents/proceedings/companion/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import itertools
plt.style.use('https://raw.githubusercontent.com/TDAmeritrade/stumpy/main/docs/stumpy.mplstyle')
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered as motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_STUMPY_Basics.ipynb) (left) and time series chains (right).
###Code
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
plt.show()
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right, is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member was added to the motif set, its location will also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series:

47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40

Assume that the subsequence length is 1 and the distance between two subsequences is simply the absolute difference between them. To be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications. To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:

| Index | Value | Left Index (IL) | Right Index (IR) |
|-------|-------|-----------------|------------------|
| 1 | 47 | - | 12 |
| 2 | 32 | 1 | 8 |
| 3 | 1 | 2 | 5 |
| 4 | 22 | 2 | 8 |
| 5 | 2 | 3 | 7 |
| 6 | 58 | 1 | 12 |
| 7 | 3 | 5 | 9 |
| 8 | 36 | 2 | 12 |
| 9 | 4 | 7 | 11 |
| 10 | -5 | 3 | 11 |
| 11 | 5 | 9 | 12 |
| 12 | 40 | 8 | - |

In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
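###Markdown
For completeness, the left and right matrix profile indices in the table above can be reproduced with a few lines of NumPy. This is only a sketch for our pathological toy setting (subsequence length 1, absolute difference as the distance measure, ties broken by the first occurrence) and not how STUMPY computes them:
###Code
T = np.array([47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40])
n = len(T)
for i in range(n):
    # 1-based index of the nearest neighbor strictly to the left/right of T[i]
    IL = np.argmin(np.abs(T[:i] - T[i])) + 1 if i > 0 else "-"
    IR = i + 2 + np.argmin(np.abs(T[i + 1:] - T[i])) if i < n - 1 else "-"
    print(f"{i + 1:>5} {T[i]:>5} {IL!s:>4} {IR!s:>4}")
###Output
_____no_output_____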
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as a forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
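###Markdown
We can also let STUMPY do the bookkeeping for us. As a sketch, we can feed the toy table's left/right indices (converted to 0-based positions, with -1 marking a missing neighbor, which is also the sentinel `stump` itself uses) straight into `stumpy.allc` and confirm that the chain it recovers matches the one highlighted above:
###Code
IL = np.array([-1, 0, 1, 1, 2, 0, 4, 1, 6, 2, 8, 7])
IR = np.array([11, 7, 4, 7, 6, 11, 8, 11, 10, 10, 11, -1])
toy_all_chain_set, toy_unanchored_chain = stumpy.allc(IL, IR)
T = np.array([47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40])
print(T[toy_unanchored_chain])  # expected: the values 1, 2, 3, 4, 5
###Output
_____no_output_____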
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality.STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain(s) - the unconditionally longest chain within a time series (there could be more than one if there were chains with the same length) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long GoogleTrend query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
df = pd.read_csv("https://zenodo.org/record/4276348/files/Time_Series_Chains_Kohls_data.csv?download=1")
df.head()
###Output
_____no_output_____
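###Markdown
As a quick sanity check (a decade of weekly samples should give on the order of 10 × 52 ≈ 520 rows and, judging by the plotting code below, the query volume appears to be scaled to a small range):
###Code
print(f"{df.shape[0]} weekly observations")
print(f"volume range: [{df['volume'].min():.2f}, {df['volume'].max():.2f}]")
###Output
_____no_output_____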
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52-week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the NumPy array, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively. Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, it also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
plt.show()
###Output
_____no_output_____
###Markdown
Time Series Chains Forecasting Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www.www2015.it/documents/proceedings/proceedings/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import itertools
plt.style.use('https://raw.githubusercontent.com/TDAmeritrade/stumpy/main/docs/stumpy.mplstyle')  # load STUMPY's style sheet from GitHub so no local copy is needed
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered as motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_1.ipynb) (left) and time series chains (right).
###Code
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
plt.show()
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right, is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member was added to the motif set, its location will also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series:

47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40

Assume that the subsequence length is 1 and the distance between two subsequences is simply the absolute difference between them. To be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications. To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:

| Index | Value | Left Index (IL) | Right Index (IR) |
|-------|-------|-----------------|------------------|
| 1 | 47 | - | 12 |
| 2 | 32 | 1 | 8 |
| 3 | 1 | 2 | 5 |
| 4 | 22 | 2 | 8 |
| 5 | 2 | 3 | 7 |
| 6 | 58 | 1 | 12 |
| 7 | 3 | 5 | 9 |
| 8 | 36 | 2 | 12 |
| 9 | 4 | 7 | 11 |
| 10 | -5 | 3 | 11 |
| 11 | 5 | 9 | 12 |
| 12 | 40 | 8 | - |

In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, "-o", markerfacecolor="None", markeredgecolor="None", linestyle="None")
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as a forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, "-o", markerfacecolor="None", markeredgecolor="None", linestyle="None")
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality.STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain(s) - the unconditionally longest chain within a time series (there could be more than one if there were chains with the same length) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long GoogleTrend query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
df = pd.read_csv("https://zenodo.org/record/4276348/files/Time_Series_Chains_Kohls_data.csv?download=1")
df.head()
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52-week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the NumPy array, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively. Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, it also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
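###Markdown
The "A" in ATSC stands for "anchored": rather than taking the unconditionally longest chain, `stumpy.atsc` grows a chain out of any single subsequence that we choose. A minimal sketch (the anchor index of 100 is an arbitrary choice, purely for illustration):
###Code
anchored_chain = stumpy.atsc(mp[:, 2], mp[:, 3], 100)
print(anchored_chain)
###Output
_____no_output_____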
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
plt.show()
###Output
_____no_output_____
###Markdown
Time Series Chains Forecasting Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www.www2015.it/documents/proceedings/proceedings/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import urllib
import ssl
import io
import itertools
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered as motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_1.ipynb) (left) and time series chains (right).
###Code
def change_plot_size(width, height, plt):
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = width
fig_size[1] = height
plt.rcParams["figure.figsize"] = fig_size
change_plot_size(20, 6, plt)
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
plt.show()
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right, is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member was added to the motif set, its location will also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series:

47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40

Assume that the subsequence length is 1 and the distance between two subsequences is simply the absolute difference between them. To be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications. To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:

| Index | Value | Left Index (IL) | Right Index (IR) |
|-------|-------|-----------------|------------------|
| 1 | 47 | - | 12 |
| 2 | 32 | 1 | 8 |
| 3 | 1 | 2 | 5 |
| 4 | 22 | 2 | 8 |
| 5 | 2 | 3 | 7 |
| 6 | 58 | 1 | 12 |
| 7 | 3 | 5 | 9 |
| 8 | 36 | 2 | 12 |
| 9 | 4 | 7 | 11 |
| 10 | -5 | 3 | 11 |
| 11 | 5 | 9 | 12 |
| 12 | 40 | 8 | - |

In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, "-o", markerfacecolor="None", markeredgecolor="None", linestyle="None")
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as a forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, "-o", markerfacecolor="None", markeredgecolor="None", linestyle="None")
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality.STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain(s) - the unconditionally longest chain within a time series (there could be more than one if there were chains with the same length) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long GoogleTrend query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
context = ssl.SSLContext() # Ignore SSL certificate verification for simplicity
url = 'https://sites.google.com/site/timeserieschain/home/Kohls_data.mat?attredirects=0&revision=1'
raw_bytes = urllib.request.urlopen(url, context=context).read()
data = io.BytesIO(raw_bytes)
mat = loadmat(data)
mdata = mat['VarName1']
mdtype = mdata.dtype
df = pd.DataFrame(mdata, dtype=mdtype, columns=['volume'])
df.head()
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52-week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the NumPy array, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively:
###Code
?stumpy.stump
###Output
_____no_output_____
###Markdown
Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, it also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
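###Markdown
Before plotting, it is worth remembering that `allc` also returned the full all-chain set. A small sketch for inspecting it (assuming, per the docs, that `all_chain_set` is a list of arrays of subsequence start positions):
###Code
chain_lengths = sorted((len(chain) for chain in all_chain_set), reverse=True)
print(f"{len(all_chain_set)} chains in the all-chain set")
print(f"Longest chain lengths: {chain_lengths[:5]}")
###Output
_____no_output_____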
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
plt.show()
###Output
_____no_output_____
###Markdown
Time Series Chains[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/TDAmeritrade/stumpy/main?filepath=notebooks/Tutorial_Time_Series_Chains.ipynb) Forecasting Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www2015.thewebconf.org/documents/proceedings/companion/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import itertools
plt.style.use('https://raw.githubusercontent.com/TDAmeritrade/stumpy/main/docs/stumpy.mplstyle')
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered as motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_STUMPY_Basics.ipynb) (left) and time series chains (right).
###Code
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
plt.show()
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right, is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member was added to the motif set, its location will also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series:

47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40

Assume that the subsequence length is 1 and the distance between two subsequences is simply the absolute difference between them. To be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications. To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:

| Index | Value | Left Index (IL) | Right Index (IR) |
|-------|-------|-----------------|------------------|
| 1 | 47 | - | 12 |
| 2 | 32 | 1 | 8 |
| 3 | 1 | 2 | 5 |
| 4 | 22 | 2 | 8 |
| 5 | 2 | 3 | 7 |
| 6 | 58 | 1 | 12 |
| 7 | 3 | 5 | 9 |
| 8 | 36 | 2 | 12 |
| 9 | 4 | 7 | 11 |
| 10 | -5 | 3 | 11 |
| 11 | 5 | 9 | 12 |
| 12 | 40 | 8 | - |

In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)  # invisible markers, only used to establish the axis limits
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
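###Markdown
The IL/IR columns in the table above can be reproduced directly from the stated assumptions (subsequence length 1, absolute difference as the distance). Below is a minimal brute-force sketch, not part of the original tutorial, using 0-based indices with -1 marking a missing neighbor:
###Code
import numpy as np

T = np.array([47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40])

def left_right_indices(T):
    n = len(T)
    il = np.full(n, -1)  # left nearest neighbor indices (IL), -1 if none
    ir = np.full(n, -1)  # right nearest neighbor indices (IR), -1 if none
    for i in range(n):
        if i > 0:
            # closest earlier value by absolute difference
            il[i] = np.argmin(np.abs(T[:i] - T[i]))
        if i < n - 1:
            # closest later value by absolute difference
            ir[i] = i + 1 + np.argmin(np.abs(T[i+1:] - T[i]))
    return il, ir

il, ir = left_right_indices(T)
print(il + 1)  # add 1 to compare with the 1-based IL column above (0 corresponds to '-')
print(ir + 1)  # add 1 to compare with the 1-based IR column above (0 corresponds to '-')
###Output
_____no_output_____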
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as a forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)  # invisible markers, only used to establish the axis limits
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
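###Markdown
The same chain can be recovered programmatically. Below is a minimal sketch (a hypothetical helper, not STUMPY's implementation) that grows an anchored chain from every index by following right neighbors that point back via their left neighbor, and keeps the longest one:
###Code
import numpy as np

# 0-based IL/IR for the toy series 47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40 (-1 = no neighbor)
il = np.array([-1, 0, 1, 1, 2, 0, 4, 1, 6, 2, 8, 7])
ir = np.array([11, 7, 4, 7, 6, 11, 8, 11, 10, 10, 11, -1])

def grow_chain(il, ir, anchor):
    # Extend the chain only while the link is connected by both a
    # forward arrow (ir[j]) and a backward arrow (il[ir[j]] == j).
    chain = [anchor]
    j = anchor
    while ir[j] != -1 and il[ir[j]] == j:
        j = ir[j]
        chain.append(j)
    return chain

longest = max((grow_chain(il, ir, a) for a in range(len(il))), key=len)
print(longest)  # [2, 4, 6, 8, 10] -> the values 1, 2, 3, 4, 5
###Output
_____no_output_____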
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality. STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain(s) - the unconditionally longest chain within a time series (there could be more than one if there were chains with the same length) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long Google Trends query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
df = pd.read_csv("https://zenodo.org/record/4276348/files/Time_Series_Chains_Kohls_data.csv?download=1")
df.head()
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52-week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the NumPy array, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively. Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
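###Markdown
To confirm the column layout described above, here is a quick peek at the first few rows of `mp` (a sketch; the exact values depend on the data):
###Code
print(mp[:3])  # columns: matrix profile value, nearest-neighbor index, left index, right index
###Output
_____no_output_____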
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
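###Markdown
Besides `allc`, STUMPY also exposes `stumpy.atsc` for growing an anchored time series chain from a user-specified subsequence. As a sketch, anchoring at the first link of the unanchored chain should recover the same chain:
###Code
# Grow an anchored time series chain (ATSC) from a chosen subsequence index
anchored_chain = stumpy.atsc(mp[:, 2], mp[:, 3], unanchored_chain[0])
print(anchored_chain)
###Output
_____no_output_____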
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
plt.show()
###Output
_____no_output_____
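###Markdown
To relate each link of the unanchored chain back to calendar time, we can map its starting index to an approximate year — a small sketch assuming 52 weeks per year starting in 2004:
###Code
for idx in unanchored_chain:
    # each index is the starting week of one chain link
    print(f"chain link starts at week {idx} (~{2004 + idx // 52})")
###Output
_____no_output_____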
###Markdown
Time Series Chains[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/TDAmeritrade/stumpy/main?filepath=notebooks/Tutorial_Time_Series_Chains.ipynb) Forecasting Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www2015.thewebconf.org/documents/proceedings/companion/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import itertools
plt.style.use('https://raw.githubusercontent.com/TDAmeritrade/stumpy/main/docs/stumpy.mplstyle')
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered as motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_STUMPY_Basics.ipynb) (left) and time series chains (right).
###Code
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
plt.show()
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right, is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member were added to the motif set, its location would also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series: 47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40. Assume that the subsequence length is 1 and the distance between two subsequences is simply the absolute difference between them. To be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications. To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:| Index | Value | Left Index (IL) | Right Index (IR) ||-------|-------|-----------------|------------------|| 1 | 47 | - | 12 || 2 | 32 | 1 | 8 || 3 | 1 | 2 | 5 || 4 | 22 | 2 | 8 || 5 | 2 | 3 | 7 || 6 | 58 | 1 | 12 || 7 | 3 | 5 | 9 || 8 | 36 | 2 | 12 || 9 | 4 | 7 | 11 || 10 | -5 | 3 | 11 || 11 | 5 | 9 | 12 || 12 | 40 | 8 | - |In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as a forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality. STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain - the unconditionally longest chain within a time series (note that there could be more than one if there were chains with the same length but only one is returned) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long Google Trends query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
df = pd.read_csv("https://zenodo.org/record/4276348/files/Time_Series_Chains_Kohls_data.csv?download=1")
df.head()
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52-week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the NumPy array, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively. Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
plt.show()
###Output
_____no_output_____
###Markdown
Time Series Chains[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/TDAmeritrade/stumpy/main?filepath=notebooks/Tutorial_Time_Series_Chains.ipynb) Forecasting Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www.www2015.it/documents/proceedings/proceedings/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import itertools
plt.style.use('stumpy.mplstyle')
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered as motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_1.ipynb) (left) and time series chains (right).
###Code
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
plt.show()
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right, is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member were added to the motif set, its location would also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series: 47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40. Assume that the subsequence length is 1 and the distance between two subsequences is simply the absolute difference between them. To be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications. To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:| Index | Value | Left Index (IL) | Right Index (IR) ||-------|-------|-----------------|------------------|| 1 | 47 | - | 12 || 2 | 32 | 1 | 8 || 3 | 1 | 2 | 5 || 4 | 22 | 2 | 8 || 5 | 2 | 3 | 7 || 6 | 58 | 1 | 12 || 7 | 3 | 5 | 9 || 8 | 36 | 2 | 12 || 9 | 4 | 7 | 11 || 10 | -5 | 3 | 11 || 11 | 5 | 9 | 12 || 12 | 40 | 8 | - |In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)  # invisible markers, only used to establish the axis limits
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as a forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)  # invisible markers, only used to establish the axis limits
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
plt.show()
###Output
_____no_output_____
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality. STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain(s) - the unconditionally longest chain within a time series (there could be more than one if there were chains with the same length) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long Google Trends query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
df = pd.read_csv("https://zenodo.org/record/4276348/files/Time_Series_Chains_Kohls_data.csv?download=1")
df.head()
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52-week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the NumPy array, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively. Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.show()
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
plt.show()
###Output
_____no_output_____
###Markdown
Time Series Chains Forecasting Web Query Data with Anchored Time Series Chains (ATSC)This example is adapted from the [Web Query Volume case study](http://www.www2015.it/documents/proceedings/proceedings/p721.pdf) and utilizes the main takeaways from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) research paper. Getting StartedLet's import the packages that we'll need to load, analyze, and plot the data.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import stumpy
from scipy.io import loadmat
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
import urllib.request  # explicitly import the submodule used below
import ssl
import io
import itertools
###Output
_____no_output_____
###Markdown
What are Time Series Chains?Time series chains may be informally considered as motifs that evolve or drift in some direction over time. The figure below illustrates the difference between [time series motifs](Tutorial_1.ipynb) (left) and time series chains (right).
###Code
def change_plot_size(width, height, plt):
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = width
fig_size[1] = height
plt.rcParams["figure.figsize"] = fig_size
change_plot_size(20, 6, plt)
x = np.random.rand(20)
y = np.random.rand(20)
n = 10
motifs_x = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
motifs_y = 0.5 * np.ones(n) + np.random.uniform(-0.05, 0.05, n)
sin_x = np.linspace(0, np.pi/2, n+1)
sin_y = np.sin(sin_x)/4
chains_x = 0.5 * np.ones(n+1) + 0.02 * np.arange(n+1)
chains_y = 0.5 * np.ones(n+1) + sin_y
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].scatter(x, y, color='lightgrey')
axes[0].scatter(motifs_x, motifs_y, color='red')
axes[1].scatter(x, y, color='lightgrey')
axes[1].scatter(chains_x[0], chains_y[0], edgecolor='red', color='white')
axes[1].scatter(chains_x[1:n], chains_y[1:n], color='red')
axes[1].scatter(chains_x[n], chains_y[n], edgecolor='red', color='white', marker='*', s=200)
###Output
_____no_output_____
###Markdown
Above, we are visualizing time series subsequences as points in high-dimensional space. Shown on the left is a time series motif and it can be thought of as a collection of points that approximate a platonic ideal. In contrast, depicted on the right, is a time series chain and it may be thought of as an evolving trail of points in the space. Here, the open red circle represents the first link in the chain, the anchor. Both motifs and chains have the property that each subsequence is relatively close to its nearest neighbor. However, the motif set (left) also has a relatively small diameter. In contrast, the set of points in a chain (right) has a diameter that is much larger than the mean of each member’s distance to its nearest neighbor and, moreover, the chain has the important property of directionality. For example, in the case of a motif, if an additional member were added to the motif set, its location would also be somewhere near the platonic ideal, but independent of the previous subsequences. In contrast, in the case of a chain, the location of the next member of the chain would be somewhere after the last red circle, possibly where the open red star is located. A Simplified ExampleAdapted from the [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) paper, consider the following time series: 47, 32, 1, 22, 2, 58, 3, 36, 4, -5, 5, 40. Assume that the subsequence length is 1 and the distance between two subsequences is simply the absolute difference between them. To be clear, we are making these simple and pathological assumptions here just for the purposes of elucidation; we are actually targeting much longer subsequence lengths and using z-normalized Euclidean distance in our applications. To capture the directionality of a time series chain, we need to store the left and right nearest neighbor information into the left (IL) and right (IR) matrix profile indices:| Index | Value | Left Index (IL) | Right Index (IR) ||-------|-------|-----------------|------------------|| 1 | 47 | - | 12 || 2 | 32 | 1 | 8 || 3 | 1 | 2 | 5 || 4 | 22 | 2 | 8 || 5 | 2 | 3 | 7 || 6 | 58 | 1 | 12 || 7 | 3 | 5 | 9 || 8 | 36 | 2 | 12 || 9 | 4 | 7 | 11 || 10 | -5 | 3 | 11 || 11 | 5 | 9 | 12 || 12 | 40 | 8 | - |In this vertical/transposed representation, the `index` column shows the location of every subsequence in the time series, the `value` column contains the original numbers from our time series above, the `IL` column shows the left matrix profile indices, and `IR` is the right matrix profile indices. For example, `IR[2] = 8` means the right nearest neighbor of `index = 2` (which has `value = 32`) is at `index = 8` (which has `value = 36`). Similarly, `IL[3] = 2` means that the left nearest neighbor of `index = 3` (with `value = 1`) is at `index = 2` (which has `value = 32`). To better visualize the left/right matrix profile index, we use arrows to link every subsequence in the time series with its left and right nearest neighbors:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, 12],
[2, 32, 1, 8],
[3, 1, 2, 5],
[4, 22, 2, 8],
[5, 2, 3, 7],
[6, 58, 1, 12],
[7, 3, 5, 9],
[8, 36, 2, 12],
[9, 4, 7, 11],
[10, -5, 3, 11],
[11, 5, 9, 12],
[12, 40, 8, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)  # invisible markers, only used to establish the axis limits
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
###Output
_____no_output_____
###Markdown
An arrow pointing from a number to its right nearest neighbor (arrows shown above the time series) can be referred to as a forward arrow and an arrow pointing from a number to its left nearest neighbor (arrows shown below the time series) can be referred to as a backward arrow. According to the formal definition of a time series chain (see [Matrix Profile VII](https://www.cs.ucr.edu/~eamonn/chains_ICDM.pdf) for a thorough definition and discussion), every pair of consecutive subsequences in a chain must be connected by both a forward arrow and a backward arrow. A keen eye will spot that the longest chain in our simplified example is:
###Code
nearest_neighbors = np.array([[1, 47, np.nan, np.nan],
[2, 32, np.nan, np.nan],
[3, 1, np.nan, 5],
[4, 22, np.nan, np.nan],
[5, 2, 3, 7],
[6, 58, np.nan, np.nan],
[7, 3, 5, 9],
[8, 36, np.nan, np.nan],
[9, 4, 7, 11],
[10, -5, np.nan, np.nan],
[11, 5, 9, np.nan],
[12, 40, np.nan, np.nan]])
colors = [['C1', 'C1'],
['C2', 'C5'],
['C3', 'C5'],
['C4', 'C4'],
['C3', 'C2'],
['C5', 'C3'],
['C3', 'C2'],
['C2', 'C1'],
['C3', 'C2'],
['C6', 'C1'],
['C6', 'C2'],
['C1', 'C1']]
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, connectionstyle="arc3, rad=-.5",)
xs = np.arange(nearest_neighbors.shape[0]) + 1
ys = np.zeros(nearest_neighbors.shape[0])
plt.plot(xs, ys, markerfacecolor="None", markeredgecolor="None", linewidth=0)  # invisible markers, only used to establish the axis limits
x0, x1, y0, y1 = plt.axis()
plot_margin = 5.0
plt.axis((x0 - plot_margin,
x1 + plot_margin,
y0 - plot_margin,
y1 + plot_margin))
plt.axis('off')
for x, y, nearest_neighbor, color in zip(xs, ys, nearest_neighbors, colors):
plt.text(x, y, str(int(nearest_neighbor[1])), color="black", fontsize=20)
# Plot right matrix profile indices
if not np.isnan(nearest_neighbor[3]):
arrow = FancyArrowPatch((x, 0.5), (nearest_neighbor[3], 0.5), color=color[0], **kw)
plt.gca().add_patch(arrow)
# Plot left matrix profile indices
if not np.isnan(nearest_neighbor[2]):
arrow = FancyArrowPatch((x, 0.0), (nearest_neighbor[2], 0.0), color=color[1], **kw)
plt.gca().add_patch(arrow)
###Output
_____no_output_____
###Markdown
The longest extracted chain is therefore 1 ⇌ 2 ⇌ 3 ⇌ 4 ⇌ 5. Note that we see a gradual monotonic increase in the data but, in reality, the increase or decrease in drift can happen in arbitrarily complex ways that can be detected by the time series chains approach. The key component of drifting is that the time series must contain chains with clear directionality. STUMPY is capable of computing:1. anchored time series chains (ATSC) - grow a chain from a user-specified anchor (i.e., specific subsequence)2. all-chain set (ALLC) - a set of anchored time series chains (i.e., each chain starts with a particular subsequence) that are not subsumed by another longer chain3. unanchored time series chain(s) - the unconditionally longest chain within a time series (there could be more than one if there were chains with the same length) So, what does this mean in the context of a real time series? Let's take a look at a real example from web query data! Retrieve the DataWe will be looking at a noisy dataset that is under-sampled and has a growing trend, which will perfectly illustrate the idea regarding time series chains. The data contains a decade-long Google Trends query volume (collected weekly from 2004-2014) for the keyword Kohl’s, an American retail chain. First, we'll download the data, extract it, and insert it into a pandas dataframe.
###Code
context = ssl.SSLContext() # Ignore SSL certificate verification for simplicity
url = 'https://sites.google.com/site/timeserieschain/home/Kohls_data.mat?attredirects=0&revision=1'
raw_bytes = urllib.request.urlopen(url, context=context).read()
data = io.BytesIO(raw_bytes)
mat = loadmat(data)
mdata = mat['VarName1']
mdtype = mdata.dtype
df = pd.DataFrame(mdata, dtype=mdtype, columns=['volume'])
df.head()
###Output
_____no_output_____
###Markdown
Visualizing the Data
###Code
plt.plot(df['volume'], color='black')
plt.xlim(0, df.shape[0]+12)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
###Output
_____no_output_____
###Markdown
The raw time series above displays ten years of web query volume for the keyword "Kohl's", where each alternating white and grey vertical band represents a 52-week period starting from 2004 to 2014. As depicted, the time series features a significant but unsurprising "end-of-year holiday bump". Relating back to time series chains, we can see that the bump is generally increasing over time and so we might be able to capture this when we compute the unanchored chain. However, as we learned above, in order to compute any time series chains, we also need the left and right matrix profile indices. Luckily for us, according to the docstring, the `stump` function not only returns the (bidirectional) matrix profile and the matrix profile indices in the first and second columns of the NumPy array, respectively, but the third and fourth columns consist of the left matrix profile indices and the right matrix profile indices, respectively:
###Code
?stumpy.stump
###Output
_____no_output_____
###Markdown
Computing the Left and Right Matrix Profile IndicesSo, let's go ahead and compute the matrix profile indices and we'll set the window size, `m = 20`, which is the approximate length of a "bump".
###Code
m = 20
mp = stumpy.stump(df['volume'], m=m)
###Output
_____no_output_____
###Markdown
Computing the Unanchored ChainNow, with our left and right matrix profile indices in hand, we are ready to call the all-chain set function, `allc`, which not only returns the all-chain set but, as a freebie, also returns the unconditionally longest chain, also known as the unanchored chain. The latter is really what we're most interested in.
###Code
all_chain_set, unanchored_chain = stumpy.allc(mp[:, 2], mp[:, 3])
###Output
_____no_output_____
###Markdown
Visualizing the Unanchored Chain
###Code
plt.plot(df['volume'], linewidth=1, color='black')
for i in range(unanchored_chain.shape[0]):
y = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m]
x = y.index.values
plt.plot(x, y, linewidth=3)
color = itertools.cycle(['white', 'gainsboro'])
for i, x in enumerate(range(0, df.shape[0], 52)):
plt.text(x+12, 0.9, str(2004+i), color="black", fontsize=20)
rect = Rectangle((x, -1), 52, 2.5, facecolor=next(color))
plt.gca().add_patch(rect)
plt.axis('off')
for i in range(unanchored_chain.shape[0]):
data = df['volume'].iloc[unanchored_chain[i]:unanchored_chain[i]+m].reset_index().values
x = data[:, 0]
y = data[:, 1]
plt.axvline(x=x[0]-x.min()+(m+5)*i + 11, alpha=0.3)
plt.axvline(x=x[0]-x.min()+(m+5)*i + 15, alpha=0.3, linestyle='-.')
plt.plot(x-x.min()+(m+5)*i, y-y.min(), linewidth=3)
###Output
_____no_output_____ |
colab/decision_tree_regression.ipynb | ###Markdown
Decision Tree Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Importing the dataset
###Code
data_set = pd.read_csv('/content/drive/MyDrive/Position_Salaries.csv')
X = data_set.iloc[:,1:-1].values
y = data_set.iloc[:, -1].values
###Output
_____no_output_____
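###Markdown
A quick peek at the raw rows helps confirm the feature/label split (a sketch; this assumes the usual Position_Salaries layout of Position, Level, and Salary columns):
###Code
print(data_set.head())  # first rows of the raw dataframe
print(X[:3])            # feature matrix: the position level
print(y[:3])            # target vector: the salary
###Output
_____no_output_____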
###Markdown
Training the Decision Tree Regression model on the whole dataset
###Code
from sklearn.tree import DecisionTreeRegressor # Import Decision Tree Regressor
from sklearn import metrics
decision_tree = DecisionTreeRegressor(random_state=0)
decision_tree.fit(X, y);
###Output
_____no_output_____
###Markdown
Predicting a new result
###Code
y_pred = decision_tree.predict([[2.5]])
print(y_pred)
# Note: accuracy_score is a classification metric; for a continuous-valued
# regression prediction it is almost always 0.0 unless the values match exactly.
print("Accuracy:", metrics.accuracy_score([6000], y_pred))
###Output
[50000.]
Accuracy: 0.0
###Markdown
Visualising the Decision Tree Regression results (higher resolution)
###Code
plt.scatter(X, y, color = 'red')
plt.plot(X, decision_tree.predict(X), color = 'blue')
plt.title('Truth or Bluff (Decision Tree Regressor)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
X_grid = np.arange(X.min(), X.max(), 0.1)  # use ndarray min/max so arange receives scalars
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X, y, color = 'red')
plt.plot(X_grid, decision_tree.predict(X_grid), color = 'blue')
plt.title('Truth or Bluff (Decision Tree Regressor)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
###Output
_____no_output_____ |
model_card_toolkit/documentation/examples/MLMD_Model_Card_Toolkit_Demo.ipynb | ###Markdown
Copyright © 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
MLMD Model Card Toolkit Demo BackgroundThis notebook demonstrates how to generate a model card using the Model Card Toolkit with MLMD and a TFX pipeline in a Jupyter/Colab environment. You can learn more about model cards at https://modelcards.withgoogle.com/about. SetupWe first need to a) install and import the necessary packages, and b) download the data. Upgrade Pip and Install TFX
###Code
try:
import colab
!pip install --upgrade pip
except ImportError:  # not running in Colab
pass
!pip install "tfx>=0.21.1,<0.22"
!pip install model-card-toolkit
###Output
_____no_output_____
###Markdown
Did you restart the runtime?If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages. Import packagesWe import necessary packages, including standard TFX component classes and check the library versions.
###Code
import os
import pprint
import tempfile
import urllib.request  # explicitly import the submodule used below
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import ExampleValidator
from tfx.components import Pusher
from tfx.components import ResolverNode
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components.base import executor_spec
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
from tfx.utils.dsl_utils import external_input
import ml_metadata
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
print('MLMD version: {}'.format(ml_metadata.__version__))
###Output
_____no_output_____
###Markdown
Set up pipeline paths
###Code
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
###Output
_____no_output_____
###Markdown
Download example dataWe download the example dataset for use in our TFX pipeline.
###Code
DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/' \
'adult.data'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
columns = [
"Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital-Status",
"Occupation", "Relationship", "Race", "Sex", "Capital-Gain", "Capital-Loss",
"Hours-per-week", "Country", "Over-50K"]
with open(_data_filepath, 'r') as f:
content = f.read()
content = content.replace(", <=50K", ', 0').replace(", >50K", ', 1')
with open(_data_filepath, 'w') as f:
f.write(','.join(columns) + '\n' + content)
###Output
_____no_output_____
###Markdown
Take a quick look at the CSV file.
###Code
!head {_data_filepath}
###Output
_____no_output_____
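###Markdown
Since the label rewrite above mapped `<=50K`/`>50K` to 0/1, a quick class-balance check is worthwhile before training — a small sketch using pandas (an assumption: pandas is available in this environment, as it is on Colab):
###Code
import pandas as pd

# Count how many rows fall into each label class (0 = <=50K, 1 = >50K)
print(pd.read_csv(_data_filepath)['Over-50K'].value_counts())
###Output
_____no_output_____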
###Markdown
Create the InteractiveContextLast, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
###Code
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
###Output
_____no_output_____
###Markdown
Run TFX components interactivelyIn the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at [TFX Colab workshop](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb). ExampleGenCreate the `ExampleGen` component to split data into training and evaluation sets, convert the data into `tf.Example` format, and copy data into the `_tfx_root` directory for other components to access.
###Code
example_gen = CsvExampleGen(input=external_input(_data_root))
context.run(example_gen)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
###Output
_____no_output_____
###Markdown
Let’s take a look at the first three training examples:
###Code
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
###Output
_____no_output_____
###Markdown
StatisticsGen`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen` and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
###Code
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
###Output
_____no_output_____
###Markdown
After `StatisticsGen` finishes running, we can visualize the outputted statistics. Try playing with the different plots!
###Code
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
SchemaGen
`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
###Code
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen).
ExampleValidator
`ExampleValidator` will take as input the statistics from `StatisticsGen`, and the schema from `SchemaGen`. By default, it compares the statistics from the evaluation split to the schema from the training split.
###Code
example_validator = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator)
context.show(example_validator.outputs['anomalies'])
###Output
_____no_output_____
###Markdown
Transform
`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.
Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)).
###Code
_census_income_constants_module_file = 'census_income_constants.py'
%%writefile {_census_income_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [20]
CATEGORICAL_FEATURE_KEYS = ["Education-Num"]
DENSE_FLOAT_FEATURE_KEYS = ["Capital-Gain", "Hours-per-week", "Capital-Loss"]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = ["Age"]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 200
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = ["Workclass", "Education", "Marital-Status", "Occupation",
"Relationship", "Race", "Sex", "Country"]
# Keys
LABEL_KEY = "Over-50K"
def transformed_name(key):
return key + '_xf'
_census_income_transform_module_file = 'census_income_transform.py'
%%writefile {_census_income_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
label = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = label
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_census_income_transform_module_file))
context.run(transform)
transform.outputs
###Output
_____no_output_____
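###Markdown
As a quick check, we can read back a transformed example the same way we inspected the raw ones. This is a sketch that assumes the same split-directory layout ('train') and GZIP compression used by `ExampleGen` above.
###Code
# Locate the transformed training examples written by the Transform component.
transformed_train_uri = os.path.join(
    transform.outputs['transformed_examples'].get()[0].uri, 'train')
transformed_filenames = [os.path.join(transformed_train_uri, name)
                         for name in os.listdir(transformed_train_uri)]
transformed_dataset = tf.data.TFRecordDataset(
    transformed_filenames, compression_type='GZIP')
# Decode and print the first transformed record.
for tfrecord in transformed_dataset.take(1):
  example = tf.train.Example()
  example.ParseFromString(tfrecord.numpy())
  pp.pprint(example)
###Output
_____no_output_____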
###Markdown
Trainer
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, [see the tutorial](https://www.tensorflow.org/guide/keras)):
###Code
_census_income_trainer_module_file = 'census_income_trainer.py'
%%writefile {_census_income_trainer_module_file}
from typing import List, Text
import os
import absl
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = census_income_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(_LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
    if _transformed_name(_LABEL_KEY) in transformed_features:
      transformed_features.pop(_transformed_name(_LABEL_KEY))
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
tf_transform_output: A TFTransformOutput.
    batch_size: the number of consecutive elements of the returned
      dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_transformed_name(_LABEL_KEY))
return dataset
def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
"""Creates a DNN Keras model.
Args:
hidden_units: [int], the layer sizes of the DNN (input layer first).
Returns:
A keras Model.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
indicator_column = [
tf.feature_column.indicator_column(categorical_column)
for categorical_column in categorical_columns
]
model = _wide_and_deep_classifier(
# TODO(b/139668410) replace with premade wide_and_deep keras model
wide_columns=indicator_column,
deep_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25])
return model
def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
"""Build a simple keras wide and deep model.
Args:
wide_columns: Feature columns wrapped in indicator_column for wide (linear)
part of the model.
deep_columns: Feature columns for deep part of the model.
dnn_hidden_units: [int], the layer sizes of the hidden DNN.
Returns:
A Wide and Deep Keras model
"""
  # The following values are hard-coded for simplicity in this example;
  # however, preferably they should be passed in as hparams.
# Keras needs the feature definitions at compile time.
# TODO(b/139081439): Automate generation of input layers from FeatureColumn.
input_layers = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
for colname in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
}
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_VOCAB_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_BUCKET_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_CATEGORICAL_FEATURE_KEYS)
})
# TODO(b/161816639): SparseFeatures for feature columns + Keras.
deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
for numnodes in dnn_hidden_units:
deep = tf.keras.layers.Dense(numnodes)(deep)
wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
output = tf.keras.layers.Dense(
1, activation='sigmoid')(
tf.keras.layers.concatenate([deep, wide]))
model = tf.keras.Model(input_layers, output)
model.compile(
loss='binary_crossentropy',
      optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary(print_fn=absl.logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 40)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 40)
model = _build_keras_model(
      # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
])
# This log path might change in the future.
log_dir = os.path.join(os.path.dirname(fn_args.serving_model_dir), 'logs')
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath(_census_income_trainer_module_file),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=100),
eval_args=trainer_pb2.EvalArgs(num_steps=50))
context.run(trainer)
###Output
_____no_output_____
###Markdown
Evaluator
The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. `Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below:
###Code
from google.protobuf.wrappers_pb2 import BoolValue
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(label_key="Over-50K")
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='FairnessIndicators',
config='{ "thresholds": [0.5] }'),
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced by feature column Race and Sex.
tfma.SlicingSpec(feature_keys=['Race']),
tfma.SlicingSpec(feature_keys=['Sex']),
tfma.SlicingSpec(feature_keys=['Race', 'Sex']),
],
options = tfma.Options(compute_confidence_intervals=BoolValue(value=True))
)
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
context.run(evaluator)
evaluator.outputs
###Output
_____no_output_____
###Markdown
Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.
###Code
context.show(evaluator.outputs['evaluation'])
###Output
_____no_output_____
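###Markdown
Beyond the interactive visualization, the evaluation results can be loaded programmatically with TFMA. A minimal sketch, assuming the first `evaluation` artifact's URI points at the TFMA output directory:
###Code
eval_result = tfma.load_eval_result(
    evaluator.outputs['evaluation'].get()[0].uri)
# Print the metrics computed for each slice (overall, Race, Sex, Race x Sex).
for slice_key, metrics in eval_result.slicing_metrics:
  print(slice_key, metrics)
###Output
_____no_output_____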
###Markdown
Populate Properties from ModelCard with Model Card Toolkit
Now that we’ve set up our TFX pipeline, we will use the Model Card Toolkit to extract key artifacts from the run and populate a Model Card.
Connect to the MLMD store used by the InteractiveContext
###Code
from ml_metadata.metadata_store import metadata_store
from IPython import display
mlmd_store = metadata_store.MetadataStore(context.metadata_connection_config)
model_uri = trainer.outputs["model"].get()[0].uri
###Output
_____no_output_____
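###Markdown
If you are curious what the InteractiveContext recorded, you can list the artifacts tracked in the MLMD store. This is purely an optional exploration sketch.
###Code
# Every component run above registered its output artifacts in ML Metadata.
for artifact in mlmd_store.get_artifacts():
  print(artifact.id, artifact.uri)
###Output
_____no_output_____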
###Markdown
Use Model Card Toolkit
Initialize the Model Card Toolkit.
###Code
from model_card_toolkit import ModelCardToolkit
mct = ModelCardToolkit(mlmd_store=mlmd_store, model_uri=model_uri)
###Output
_____no_output_____
###Markdown
Create Model Card workspace.
###Code
model_card = mct.scaffold_assets()
###Output
_____no_output_____
###Markdown
Annotate more information into the Model Card.
It is also important to document model information that might be important to downstream users, such as its limitations, intended use cases, trade-offs, and ethical considerations. For each of these sections, we can directly add new JSON objects to represent this information.
###Code
model_card.model_details.name = 'Census Income Classifier'
model_card.model_details.overview = (
'This is a wide and deep Keras model which aims to classify whether or not '
'an individual has an income of over $50,000 based on various demographic '
'features. The model is trained on the UCI Census Income Dataset. This is '
'not a production model, and this dataset has traditionally only been used '
'for research purposes. In this Model Card, you can review quantitative '
'components of the model’s performance and data, as well as information '
'about the model’s intended uses, limitations, and ethical considerations.'
)
model_card.model_details.owners = [
{'name': 'Model Cards Team', 'contact': '[email protected]'}
]
model_card.considerations.use_cases = [
'This dataset that this model was trained on was originally created to '
'support the machine learning community in conducting empirical analysis '
'of ML algorithms. The Adult Data Set can be used in fairness-related '
'studies that compare inequalities across sex and race, based on '
'people’s annual incomes.'
]
model_card.considerations.limitations = [
'This is a class-imbalanced dataset across a variety of sensitive classes.'
' The ratio of male-to-female examples is about 2:1 and there are far more'
' examples with the “white” attribute than every other race combined. '
'Furthermore, the ratio of $50,000 or less earners to $50,000 or more '
'earners is just over 3:1. Due to the imbalance across income levels, we '
'can see that our true negative rate seems quite high, while our true '
'positive rate seems quite low. This is true to an even greater degree '
'when we only look at the “female” sub-group, because there are even '
'fewer female examples in the $50,000+ earner group, causing our model to '
'overfit these examples. To avoid this, we can try various remediation '
'strategies in future iterations (e.g. undersampling, hyperparameter '
'tuning, etc), but we may not be able to fix all of the fairness issues.'
]
model_card.considerations.ethical_considerations = [{
'name':
'We risk expressing the viewpoint that the attributes in this dataset '
'are the only ones that are predictive of someone’s income, even '
'though we know this is not the case.',
'mitigation_strategy':
'As mentioned, some interventions may need to be performed to address '
'the class imbalances in the dataset.'
}]
###Output
_____no_output_____
###Markdown
Filter and Add Graphs.
We can filter the graphs generated by the TFX components to include those most relevant for the Model Card using the function defined below. In this example, we filter for `race` and `sex`, two potentially sensitive attributes. Each Model Card will have up to three sections for graphs -- training dataset statistics, evaluation dataset statistics, and quantitative analysis of our model’s performance.
###Code
# These are the graphs that will appear in the Quantitative Analysis portion of
# the Model Card. Feel free to add or remove from this list.
TARGET_EVAL_GRAPH_NAMES = [
'fairness_indicators_metrics/[email protected]',
'fairness_indicators_metrics/[email protected]',
'binary_accuracy',
'example_count | Race_X_Sex',
]
# These are the graphs that will appear in both the Train Set and Eval Set
# portions of the Model Card. Feel free to add or remove from this list.
TARGET_DATASET_GRAPH_NAMES = [
'counts | Race',
'counts | Sex',
]
def filter_graphs(graphics, target_graph_names):
result = []
for graph in graphics:
for target_graph_name in target_graph_names:
if graph.name.startswith(target_graph_name):
result.append(graph)
result.sort(key=lambda g: g.name)
return result
# Populating the three different sections using the filter defined above. To
# see all the graphs available in a section, we can iterate through each of the
# different collections.
model_card.quantitative_analysis.graphics.collection = filter_graphs(
model_card.quantitative_analysis.graphics.collection, TARGET_EVAL_GRAPH_NAMES)
model_card.model_parameters.data.eval.graphics.collection = filter_graphs(
model_card.model_parameters.data.eval.graphics.collection, TARGET_DATASET_GRAPH_NAMES)
model_card.model_parameters.data.train.graphics.collection = filter_graphs(
model_card.model_parameters.data.train.graphics.collection, TARGET_DATASET_GRAPH_NAMES)
###Output
_____no_output_____
###Markdown
We then add (optional) descriptions for each of the graph sections.
###Code
model_card.model_parameters.data.train.graphics.description = (
'This section includes graphs displaying the class distribution for the '
'“Race” and “Sex” attributes in our training dataset. We chose to '
'show these graphs in particular because we felt it was important that '
'users see the class imbalance.'
)
model_card.model_parameters.data.eval.graphics.description = (
'Like the training set, we provide graphs showing the class distribution '
'of the data we used to evaluate our model’s performance. '
)
model_card.quantitative_analysis.graphics.description = (
'These graphs show how the model performs for data sliced by “Race”, '
'“Sex” and the intersection of these attributes. The metrics we chose '
'to display are “Accuracy”, “False Positive Rate”, and “False '
'Negative Rate”, because we anticipated that the class imbalances might '
'cause our model to underperform for certain groups.'
)
mct.update_model_card_json(model_card)
###Output
_____no_output_____
###Markdown
Generate the Model Card.
We can now display the Model Card in HTML format.
###Code
html = mct.export_format()
display.display(display.HTML(html))
###Output
_____no_output_____
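###Markdown
The HTML can also be written to disk so the Model Card can be shared outside the notebook; the filename here is our own choice, not part of the toolkit API.
###Code
with open('model_card.html', 'w') as f:
  f.write(html)
###Output
_____no_output_____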
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
MLMD Model Card Toolkit Demo
Background
This notebook demonstrates how to generate a model card using the Model Card Toolkit with MLMD and a TFX pipeline in a Jupyter/Colab environment. You can learn more about model cards at https://modelcards.withgoogle.com/about.
Setup
We first need to a) install and import the necessary packages, and b) download the data.
Upgrade to Pip 20.2 and Install TFX
###Code
!pip install --upgrade pip==20.2
!pip install model-card-toolkit
###Output
_____no_output_____
###Markdown
Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.
Import packages
We import the necessary packages, including standard TFX component classes, and check the library versions.
###Code
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import Pusher
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components.base import executor_spec
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
import ml_metadata as mlmd
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.version.__version__))
print('MLMD version: {}'.format(mlmd.__version__))
###Output
_____no_output_____
###Markdown
Set up pipeline paths
###Code
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
###Output
_____no_output_____
###Markdown
Download example data
We download the example dataset for use in our TFX pipeline.
###Code
DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/' \
'adult.data'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
columns = [
"Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital-Status",
"Occupation", "Relationship", "Race", "Sex", "Capital-Gain", "Capital-Loss",
"Hours-per-week", "Country", "Over-50K"]
with open(_data_filepath, 'r') as f:
content = f.read()
content = content.replace(", <=50K", ', 0').replace(", >50K", ', 1')
with open(_data_filepath, 'w') as f:
f.write(','.join(columns) + '\n' + content)
###Output
_____no_output_____
###Markdown
Take a quick look at the CSV file.
###Code
!head {_data_filepath}
###Output
_____no_output_____
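###Markdown
As an optional sanity check (a sketch assuming pandas is available in the environment), we can confirm that the label column was rewritten to 0/1.
###Code
import pandas as pd

df = pd.read_csv(_data_filepath)
# Expect roughly a 3:1 split of 0s (<=50K) to 1s (>50K).
print(df['Over-50K'].value_counts())
###Output
_____no_output_____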
###Markdown
Create the InteractiveContext
Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
###Code
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext(pipeline_name="Census Income Classification Pipeline")
###Output
_____no_output_____
###Markdown
Run TFX components interactively
In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at [TFX Colab workshop](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb).
ExampleGen
Create the `ExampleGen` component to split data into training and evaluation sets, convert the data into `tf.Example` format, and copy data into the `_tfx_root` directory for other components to access.
###Code
example_gen = CsvExampleGen(input_base=_data_root)
context.run(example_gen)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
###Output
_____no_output_____
###Markdown
Let’s take a look at the first three training examples:
###Code
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
###Output
_____no_output_____
###Markdown
StatisticsGen
`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen` and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
###Code
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
###Output
_____no_output_____
###Markdown
After `StatisticsGen` finishes running, we can visualize the outputted statistics. Try playing with the different plots!
###Code
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
SchemaGen
`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
###Code
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen).
Transform
`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.
Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)).
###Code
_census_income_constants_module_file = 'census_income_constants.py'
%%writefile {_census_income_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [20]
CATEGORICAL_FEATURE_KEYS = ["Education-Num"]
DENSE_FLOAT_FEATURE_KEYS = ["Capital-Gain", "Hours-per-week", "Capital-Loss"]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = ["Age"]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 200
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = ["Workclass", "Education", "Marital-Status", "Occupation",
"Relationship", "Race", "Sex", "Country"]
# Keys
LABEL_KEY = "Over-50K"
def transformed_name(key):
return key + '_xf'
_census_income_transform_module_file = 'census_income_transform.py'
%%writefile {_census_income_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
label = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = label
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_census_income_transform_module_file))
context.run(transform)
transform.outputs
###Output
_____no_output_____
###Markdown
Trainer
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, [see the tutorial](https://www.tensorflow.org/guide/keras)):
###Code
_census_income_trainer_module_file = 'census_income_trainer.py'
%%writefile {_census_income_trainer_module_file}
from typing import List, Text
import os
import absl
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = census_income_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(_LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
if _transformed_name(_LABEL_KEY) in transformed_features:
transformed_features.pop(_transformed_name(_LABEL_KEY))
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
tf_transform_output: A TFTransformOutput.
    batch_size: the number of consecutive elements of the returned
      dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_transformed_name(_LABEL_KEY))
return dataset
def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
"""Creates a DNN Keras model.
Args:
hidden_units: [int], the layer sizes of the DNN (input layer first).
Returns:
A keras Model.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
indicator_column = [
tf.feature_column.indicator_column(categorical_column)
for categorical_column in categorical_columns
]
model = _wide_and_deep_classifier(
# TODO(b/139668410) replace with premade wide_and_deep keras model
wide_columns=indicator_column,
deep_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25])
return model
def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
"""Build a simple keras wide and deep model.
Args:
wide_columns: Feature columns wrapped in indicator_column for wide (linear)
part of the model.
deep_columns: Feature columns for deep part of the model.
dnn_hidden_units: [int], the layer sizes of the hidden DNN.
Returns:
A Wide and Deep Keras model
"""
  # The following values are hard-coded for simplicity in this example;
  # however, preferably they should be passed in as hparams.
# Keras needs the feature definitions at compile time.
# TODO(b/139081439): Automate generation of input layers from FeatureColumn.
input_layers = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
for colname in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
}
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_VOCAB_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_BUCKET_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_CATEGORICAL_FEATURE_KEYS)
})
# TODO(b/161816639): SparseFeatures for feature columns + Keras.
deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
for numnodes in dnn_hidden_units:
deep = tf.keras.layers.Dense(numnodes)(deep)
wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
output = tf.keras.layers.Dense(
1, activation='sigmoid')(
tf.keras.layers.concatenate([deep, wide]))
model = tf.keras.Model(input_layers, output)
model.compile(
loss='binary_crossentropy',
      optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary(print_fn=absl.logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 40)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 40)
model = _build_keras_model(
      # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
])
# This log path might change in the future.
log_dir = os.path.join(os.path.dirname(fn_args.serving_model_dir), 'logs')
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath(_census_income_trainer_module_file),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=100),
eval_args=trainer_pb2.EvalArgs(num_steps=50))
context.run(trainer)
###Output
_____no_output_____
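###Markdown
The Trainer exported a SavedModel with the serving signature defined in `run_fn`. We can load it back and list its signatures; note that the 'Format-Serving' subdirectory name is an assumption about this TFX version's artifact layout.
###Code
model_artifact_uri = trainer.outputs['model'].get()[0].uri
# Assumed layout: the serving model lives under 'Format-Serving'.
serving_model_dir = os.path.join(model_artifact_uri, 'Format-Serving')
loaded_model = tf.saved_model.load(serving_model_dir)
print(list(loaded_model.signatures.keys()))
###Output
_____no_output_____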
###Markdown
Evaluator
The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. `Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below:
###Code
from google.protobuf.wrappers_pb2 import BoolValue
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(label_key="Over-50K")
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='FairnessIndicators',
config='{ "thresholds": [0.5] }'),
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced by feature column Race and Sex.
tfma.SlicingSpec(feature_keys=['Race']),
tfma.SlicingSpec(feature_keys=['Sex']),
tfma.SlicingSpec(feature_keys=['Race', 'Sex']),
],
options = tfma.Options(compute_confidence_intervals=BoolValue(value=True))
)
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
context.run(evaluator)
evaluator.outputs
###Output
_____no_output_____
###Markdown
Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.
###Code
context.show(evaluator.outputs['evaluation'])
###Output
_____no_output_____
###Markdown
Populate Properties from ModelCard with Model Card Toolkit
Now that we’ve set up our TFX pipeline, we will use the Model Card Toolkit to extract key artifacts from the run and populate a Model Card.
Connect to the MLMD store used by the InteractiveContext
###Code
from ml_metadata.metadata_store import metadata_store
from IPython import display
mlmd_store = metadata_store.MetadataStore(context.metadata_connection_config)
model_uri = trainer.outputs["model"].get()[0].uri
###Output
_____no_output_____
###Markdown
Use Model Card Toolkit
Initialize the Model Card Toolkit.
###Code
import model_card_toolkit as mctlib
mlmd_source = mctlib.utils.MlmdSource(store=mlmd_store, model_uri=model_uri)
mct = mctlib.ModelCardToolkit(mlmd_source=mlmd_source)
###Output
_____no_output_____
###Markdown
Create Model Card workspace.
###Code
model_card = mct.scaffold_assets()
###Output
_____no_output_____
###Markdown
Annotate more information into the Model Card.
It is also important to document model information that might be important to downstream users, such as its limitations, intended use cases, trade-offs, and ethical considerations. For each of these sections, we can directly add new JSON objects to represent this information.
###Code
model_card.model_details.name = 'Census Income Classifier'
model_card.model_details.overview = (
'This is a wide and deep Keras model which aims to classify whether or not '
'an individual has an income of over $50,000 based on various demographic '
'features. The model is trained on the UCI Census Income Dataset. This is '
'not a production model, and this dataset has traditionally only been used '
'for research purposes. In this Model Card, you can review quantitative '
'components of the model’s performance and data, as well as information '
'about the model’s intended uses, limitations, and ethical considerations.'
)
model_card.model_details.owners = [
mctlib.Owner(name='Model Cards Team',
contact='[email protected]')
]
model_card.considerations.use_cases = [mctlib.UseCase(description=
'This dataset that this model was trained on was originally created to '
'support the machine learning community in conducting empirical analysis '
'of ML algorithms. The Adult Data Set can be used in fairness-related '
'studies that compare inequalities across sex and race, based on '
'people’s annual incomes.')
]
model_card.considerations.limitations = [mctlib.Limitation(description=
'This is a class-imbalanced dataset across a variety of sensitive classes.'
' The ratio of male-to-female examples is about 2:1 and there are far more'
' examples with the “white” attribute than every other race combined. '
'Furthermore, the ratio of $50,000 or less earners to $50,000 or more '
'earners is just over 3:1. Due to the imbalance across income levels, we '
'can see that our true negative rate seems quite high, while our true '
'positive rate seems quite low. This is true to an even greater degree '
'when we only look at the “female” sub-group, because there are even '
'fewer female examples in the $50,000+ earner group, causing our model to '
'overfit these examples. To avoid this, we can try various remediation '
'strategies in future iterations (e.g. undersampling, hyperparameter '
'tuning, etc), but we may not be able to fix all of the fairness issues.')
]
model_card.considerations.ethical_considerations = [mctlib.Risk(
name= 'We risk expressing the viewpoint that the attributes in this dataset '
'are the only ones that are predictive of someone’s income, even '
'though we know this is not the case.',
mitigation_strategy= 'As mentioned, some interventions may need to be '
'performed to address the class imbalances in the dataset.'
)
]
###Output
_____no_output_____
###Markdown
Filter and Add Graphs.
We can filter the graphs generated by the TFX components to include those most relevant for the Model Card using the function defined below. In this example, we filter for `race` and `sex`, two potentially sensitive attributes. Each Model Card will have up to three sections for graphs -- training dataset statistics, evaluation dataset statistics, and quantitative analysis of our model’s performance.
###Code
# These are the graphs that will appear in the Quantitative Analysis portion of
# the Model Card. Feel free to add or remove from this list.
TARGET_EVAL_GRAPH_NAMES = [
'fairness_indicators_metrics/[email protected]',
'fairness_indicators_metrics/[email protected]',
'binary_accuracy',
'example_count | Race_X_Sex',
]
# These are the graphs that will appear in both the Train Set and Eval Set
# portions of the Model Card. Feel free to add or remove from this list.
TARGET_DATASET_GRAPH_NAMES = [
'counts | Race',
'counts | Sex',
]
def filter_graphs(graphics, target_graph_names):
result = []
for graph in graphics:
for target_graph_name in target_graph_names:
if graph.name.startswith(target_graph_name):
result.append(graph)
result.sort(key=lambda g: g.name)
return result
# Populating the three different sections using the filter defined above. To
# see all the graphs available in a section, we can iterate through each of the
# different collections.
model_card.quantitative_analysis.graphics.collection = filter_graphs(
model_card.quantitative_analysis.graphics.collection, TARGET_EVAL_GRAPH_NAMES)
model_card.model_parameters.data[0].graphics.collection = filter_graphs(
model_card.model_parameters.data[0].graphics.collection, TARGET_DATASET_GRAPH_NAMES)
model_card.model_parameters.data[1].graphics.collection = filter_graphs(
model_card.model_parameters.data[1].graphics.collection, TARGET_DATASET_GRAPH_NAMES)
###Output
_____no_output_____
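###Markdown
To discover which graph names are available for filtering, iterate through a section's collection and print each name, as mentioned in the comment above. A small sketch of that exploration (run it before `filter_graphs` overwrites the collections to see the full list):
###Code
for graph in model_card.quantitative_analysis.graphics.collection:
  print(graph.name)
###Output
_____no_output_____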
###Markdown
We then add (optional) descriptions for each of the graph sections.
###Code
model_card.model_parameters.data[0].name = 'train_set'
model_card.model_parameters.data[0].graphics.description = (
'This section includes graphs displaying the class distribution for the '
'“Race” and “Sex” attributes in our training dataset. We chose to '
'show these graphs in particular because we felt it was important that '
'users see the class imbalance.'
)
model_card.model_parameters.data[1].name = 'eval_set'
model_card.model_parameters.data[1].graphics.description = (
'Like the training set, we provide graphs showing the class distribution '
'of the data we used to evaluate our model’s performance. '
)
model_card.quantitative_analysis.graphics.description = (
'These graphs show how the model performs for data sliced by “Race”, '
'“Sex” and the intersection of these attributes. The metrics we chose '
'to display are “Accuracy”, “False Positive Rate”, and “False '
'Negative Rate”, because we anticipated that the class imbalances might '
'cause our model to underperform for certain groups.'
)
mct.update_model_card(model_card)
###Output
_____no_output_____
###Markdown
Generate the Model Card.We can now display the Model Card in HTML format.
###Code
html = mct.export_format()
display.display(display.HTML(html))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
MLMD Model Card Toolkit Demo
Background
This notebook demonstrates how to generate a model card using the Model Card Toolkit with MLMD and a TFX pipeline in a Jupyter/Colab environment. You can learn more about model cards at https://modelcards.withgoogle.com/about.
Setup
We first need to a) install and import the necessary packages, and b) download the data.
Upgrade to Pip 21 (or later) and Install Model Card Toolkit
###Code
# This is just a fancy way of doing
# !pip install --upgrade pip==21.3
# !pip install model-card-toolkit
import os
import time
restart_required = False
pip_version = !pip --version
if pip_version[0] < "pip 21":
!pip install --upgrade pip==21.3
restart_required = True
try:
import model_card_toolkit
except ImportError:
!pip install model-card-toolkit
restart_required = True
if restart_required:
print('\n\nRestarting the runtime due to updated installs. Please run again.')
print('You can ignore the [Your session crashed for an unknown reason] error.')
time.sleep(1) # Ensure that the "Restarting..." message is printed
os.kill(os.getpid(), 9)
else:
print('Already installed.')
###Output
_____no_output_____
###Markdown
If you are using Google Colab, the runtime must be restarted after installing new packages. The above cell will do so automatically if required. Unfortunately, you must manually restart any bulk commands (e.g., Runtime > Run all).
Import packages
We import the necessary packages, including standard TFX component classes, and check the library versions.
###Code
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import Pusher
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.components.base import executor_spec
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
import ml_metadata as mlmd
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.version.__version__))
print('MLMD version: {}'.format(mlmd.__version__))
###Output
_____no_output_____
###Markdown
Set up pipeline paths
###Code
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
###Output
_____no_output_____
###Markdown
Download example data
We download the example dataset for use in our TFX pipeline.
###Code
DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/' \
'adult.data'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
columns = [
"Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital-Status",
"Occupation", "Relationship", "Race", "Sex", "Capital-Gain", "Capital-Loss",
"Hours-per-week", "Country", "Over-50K"]
with open(_data_filepath, 'r') as f:
content = f.read()
content = content.replace(", <=50K", ', 0').replace(", >50K", ', 1')
with open(_data_filepath, 'w') as f:
f.write(','.join(columns) + '\n' + content)
###Output
_____no_output_____
###Markdown
Take a quick look at the CSV file.
###Code
!head {_data_filepath}
###Output
_____no_output_____
###Markdown
Create the InteractiveContext
Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
###Code
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext(pipeline_name="Census Income Classification Pipeline")
###Output
_____no_output_____
###Markdown
Run TFX components interactively
In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at [TFX Colab workshop](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb).
ExampleGen
Create the `ExampleGen` component to split data into training and evaluation sets, convert the data into `tf.Example` format, and copy data into the `_tfx_root` directory for other components to access.
###Code
example_gen = CsvExampleGen(input_base=_data_root)
context.run(example_gen)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
###Output
_____no_output_____
###Markdown
Let’s take a look at the first three training examples:
###Code
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
###Output
_____no_output_____
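###Markdown
If you prefer plain Python values to the pretty-printed proto, a small helper can flatten a `tf.train.Example` into a dict. This is an illustrative utility, not part of TFX.
###Code
def example_to_dict(example):
  """Flattens a tf.train.Example into {feature_name: list_of_values}."""
  result = {}
  for name, feature in example.features.feature.items():
    # 'kind' is one of 'bytes_list', 'float_list' or 'int64_list'.
    kind = feature.WhichOneof('kind')
    if kind is not None:
      result[name] = list(getattr(feature, kind).value)
  return result
# `example` still holds the last record decoded above.
pp.pprint(example_to_dict(example))
###Output
_____no_output_____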
###Markdown
StatisticsGen`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen` and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
###Code
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
###Output
_____no_output_____
###Markdown
After `StatisticsGen` finishes running, we can visualize the statistics it produced. Try playing with the different plots!
###Code
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
SchemaGen`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
###Code
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
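###Markdown
The schema artifact is a text-format proto on disk, so you can load and curate it with TFDV before relying on it downstream. A minimal sketch follows; the `schema.pbtxt` filename is the usual convention, but verify it for your TFX version.
###Code
import tensorflow_data_validation as tfdv
schema_path = os.path.join(
    schema_gen.outputs['schema'].get()[0].uri, 'schema.pbtxt')  # assumed filename
schema = tfdv.load_schema_text(schema_path)
# Example curation step: require 'Age' to be present in at least 90% of examples.
tfdv.get_feature(schema, 'Age').presence.min_fraction = 0.9
###Output
_____no_output_____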
###Markdown
To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen). Transform`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)).
###Code
_census_income_constants_module_file = 'census_income_constants.py'
%%writefile {_census_income_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [20]
CATEGORICAL_FEATURE_KEYS = ["Education-Num"]
DENSE_FLOAT_FEATURE_KEYS = ["Capital-Gain", "Hours-per-week", "Capital-Loss"]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = ["Age"]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 200
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = ["Workclass", "Education", "Marital-Status", "Occupation",
"Relationship", "Race", "Sex", "Country"]
# Keys
LABEL_KEY = "Over-50K"
def transformed_name(key):
return key + '_xf'
_census_income_transform_module_file = 'census_income_transform.py'
%%writefile {_census_income_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
label = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = label
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_census_income_transform_module_file))
context.run(transform)
transform.outputs
###Output
_____no_output_____
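###Markdown
Before handing the transform graph to the Trainer, it can be useful to sanity-check what `Transform` produced. The sketch below wraps the artifact URI with `tft.TFTransformOutput`, the same accessor the trainer module uses.
###Code
import tensorflow_transform as tft
tf_transform_output = tft.TFTransformOutput(
    transform.outputs['transform_graph'].get()[0].uri)
# The post-transform feature spec the model will be fed.
pp.pprint(tf_transform_output.transformed_feature_spec())
###Output
_____no_output_____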
###Markdown
TrainerLet's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, [see the tutorial](https://www.tensorflow.org/guide/keras)):
###Code
_census_income_trainer_module_file = 'census_income_trainer.py'
%%writefile {_census_income_trainer_module_file}
from typing import List, Text
import os
import absl
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = census_income_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(_LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
if _transformed_name(_LABEL_KEY) in transformed_features:
transformed_features.pop(_transformed_name(_LABEL_KEY))
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
tf_transform_output: A TFTransformOutput.
    batch_size: the number of consecutive elements of the returned dataset to
      combine in a single batch.
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_transformed_name(_LABEL_KEY))
return dataset
def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
"""Creates a DNN Keras model.
Args:
hidden_units: [int], the layer sizes of the DNN (input layer first).
Returns:
A keras Model.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
indicator_column = [
tf.feature_column.indicator_column(categorical_column)
for categorical_column in categorical_columns
]
model = _wide_and_deep_classifier(
# TODO(b/139668410) replace with premade wide_and_deep keras model
wide_columns=indicator_column,
deep_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25])
return model
def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
"""Build a simple keras wide and deep model.
Args:
wide_columns: Feature columns wrapped in indicator_column for wide (linear)
part of the model.
deep_columns: Feature columns for deep part of the model.
dnn_hidden_units: [int], the layer sizes of the hidden DNN.
Returns:
A Wide and Deep Keras model
"""
  # Following values are hard coded for simplicity in this example;
  # however, they should preferably be passed in as hparams.
# Keras needs the feature definitions at compile time.
# TODO(b/139081439): Automate generation of input layers from FeatureColumn.
input_layers = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
for colname in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
}
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_VOCAB_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_BUCKET_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_CATEGORICAL_FEATURE_KEYS)
})
# TODO(b/161816639): SparseFeatures for feature columns + Keras.
deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
for numnodes in dnn_hidden_units:
deep = tf.keras.layers.Dense(numnodes)(deep)
wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
output = tf.keras.layers.Dense(
1, activation='sigmoid')(
tf.keras.layers.concatenate([deep, wide]))
model = tf.keras.Model(input_layers, output)
model.compile(
loss='binary_crossentropy',
      optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary(print_fn=absl.logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 40)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 40)
model = _build_keras_model(
      # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
])
# This log path might change in the future.
log_dir = os.path.join(os.path.dirname(fn_args.serving_model_dir), 'logs')
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath(_census_income_trainer_module_file),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=100),
eval_args=trainer_pb2.EvalArgs(num_steps=50))
context.run(trainer)
###Output
_____no_output_____
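###Markdown
It is worth confirming that the serving signature was attached before moving on to evaluation. The sketch below assumes the `Format-Serving` subdirectory that recent TFX versions use for the SavedModel; older versions lay the artifact out differently.
###Code
model_dir = trainer.outputs['model'].get()[0].uri
serving_model_path = os.path.join(model_dir, 'Format-Serving')  # assumed layout
loaded_model = tf.saved_model.load(serving_model_path)
print(list(loaded_model.signatures.keys()))  # expect ['serving_default']
###Output
_____no_output_____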
###Markdown
EvaluatorThe `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. `Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below:
###Code
from google.protobuf.wrappers_pb2 import BoolValue
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(label_key="Over-50K")
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='FairnessIndicators',
config='{ "thresholds": [0.5] }'),
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced by feature column Race and Sex.
tfma.SlicingSpec(feature_keys=['Race']),
tfma.SlicingSpec(feature_keys=['Sex']),
tfma.SlicingSpec(feature_keys=['Race', 'Sex']),
],
options = tfma.Options(compute_confidence_intervals=BoolValue(value=True))
)
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
context.run(evaluator)
evaluator.outputs
###Output
_____no_output_____
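###Markdown
Besides the interactive visualization below, the evaluation artifact can be loaded programmatically, which is handy for rendering one specific slice. A short sketch using TFMA's public helpers:
###Code
eval_uri = evaluator.outputs['evaluation'].get()[0].uri
eval_result = tfma.load_eval_result(eval_uri)
# Render metrics for the 'Sex' slice defined in the slicing_specs above.
tfma.view.render_slicing_metrics(eval_result, slicing_column='Sex')
###Output
_____no_output_____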
###Markdown
Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.
###Code
context.show(evaluator.outputs['evaluation'])
###Output
_____no_output_____
###Markdown
Populate Properties from ModelCard with Model Card ToolkitNow that we’ve set up our TFX pipeline, we will use the Model Card Toolkit to extract key artifacts from the run and populate a Model Card. Connect to the MLMD store used by the InteractiveContext
###Code
from ml_metadata.metadata_store import metadata_store
from IPython import display
mlmd_store = metadata_store.MetadataStore(context.metadata_connection_config)
model_uri = trainer.outputs["model"].get()[0].uri
###Output
_____no_output_____
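###Markdown
Everything the Model Card Toolkit extracts comes from these MLMD records, so a quick query is a useful sanity check that the run was captured. A minimal sketch using the MLMD client API:
###Code
# List the artifact types the TFX run registered (Examples, Schema, Model, ...).
for artifact_type in mlmd_store.get_artifact_types():
  print(artifact_type.name)
print('Total artifacts recorded: {}'.format(len(mlmd_store.get_artifacts())))
###Output
_____no_output_____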
###Markdown
Use Model Card Toolkit Initialize the Model Card Toolkit.
###Code
import model_card_toolkit as mctlib
mlmd_source = mctlib.utils.MlmdSource(store=mlmd_store, model_uri=model_uri)
mct = mctlib.ModelCardToolkit(mlmd_source=mlmd_source)
###Output
_____no_output_____
###Markdown
Create Model Card workspace.
###Code
model_card = mct.scaffold_assets()
###Output
_____no_output_____
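###Markdown
`scaffold_assets()` pre-populates the card from MLMD, including dataset and evaluation graphs. Before filtering them later in this notebook, you can list what was collected:
###Code
# Names of the evaluation graphs MCT pulled out of MLMD for this model.
for graph in model_card.quantitative_analysis.graphics.collection:
  print(graph.name)
###Output
_____no_output_____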
###Markdown
Annotate more information into Model Card.It is also important to document model information that matters to downstream users, such as its limitations, intended use cases, trade-offs, and ethical considerations. For each of these sections, we can directly populate the corresponding Model Card fields to represent this information.
###Code
model_card.model_details.name = 'Census Income Classifier'
model_card.model_details.overview = (
'This is a wide and deep Keras model which aims to classify whether or not '
'an individual has an income of over $50,000 based on various demographic '
'features. The model is trained on the UCI Census Income Dataset. This is '
'not a production model, and this dataset has traditionally only been used '
'for research purposes. In this Model Card, you can review quantitative '
'components of the model’s performance and data, as well as information '
'about the model’s intended uses, limitations, and ethical considerations.'
)
model_card.model_details.owners = [
mctlib.Owner(name='Model Cards Team',
contact='[email protected]')
]
model_card.considerations.use_cases = [mctlib.UseCase(description=
'This dataset that this model was trained on was originally created to '
'support the machine learning community in conducting empirical analysis '
'of ML algorithms. The Adult Data Set can be used in fairness-related '
'studies that compare inequalities across sex and race, based on '
'people’s annual incomes.')
]
model_card.considerations.limitations = [mctlib.Limitation(description=
'This is a class-imbalanced dataset across a variety of sensitive classes.'
' The ratio of male-to-female examples is about 2:1 and there are far more'
' examples with the “white” attribute than every other race combined. '
'Furthermore, the ratio of $50,000 or less earners to $50,000 or more '
'earners is just over 3:1. Due to the imbalance across income levels, we '
'can see that our true negative rate seems quite high, while our true '
'positive rate seems quite low. This is true to an even greater degree '
'when we only look at the “female” sub-group, because there are even '
'fewer female examples in the $50,000+ earner group, causing our model to '
'overfit these examples. To avoid this, we can try various remediation '
'strategies in future iterations (e.g. undersampling, hyperparameter '
'tuning, etc), but we may not be able to fix all of the fairness issues.')
]
model_card.considerations.ethical_considerations = [mctlib.Risk(
name= 'We risk expressing the viewpoint that the attributes in this dataset '
'are the only ones that are predictive of someone’s income, even '
'though we know this is not the case.',
mitigation_strategy= 'As mentioned, some interventions may need to be '
'performed to address the class imbalances in the dataset.'
)
]
###Output
_____no_output_____
###Markdown
Filter and Add Graphs.We can filter the graphs generated by the TFX components to include those most relevant for the Model Card using the function defined below. In this example, we filter for `race` and `sex`, two potentially sensitive attributes. Each Model Card will have up to three sections for graphs -- training dataset statistics, evaluation dataset statistics, and quantitative analysis of our model’s performance.
###Code
# These are the graphs that will appear in the Quantitative Analysis portion of
# the Model Card. Feel free to add or remove from this list.
TARGET_EVAL_GRAPH_NAMES = [
'fairness_indicators_metrics/[email protected]',
'fairness_indicators_metrics/[email protected]',
'binary_accuracy',
'example_count | Race_X_Sex',
]
# These are the graphs that will appear in both the Train Set and Eval Set
# portions of the Model Card. Feel free to add or remove from this list.
TARGET_DATASET_GRAPH_NAMES = [
'counts | Race',
'counts | Sex',
]
def filter_graphs(graphics, target_graph_names):
result = []
for graph in graphics:
for target_graph_name in target_graph_names:
if graph.name.startswith(target_graph_name):
result.append(graph)
result.sort(key=lambda g: g.name)
return result
# Populating the three different sections using the filter defined above. To
# see all the graphs available in a section, we can iterate through each of the
# different collections.
model_card.quantitative_analysis.graphics.collection = filter_graphs(
model_card.quantitative_analysis.graphics.collection, TARGET_EVAL_GRAPH_NAMES)
model_card.model_parameters.data[0].graphics.collection = filter_graphs(
model_card.model_parameters.data[0].graphics.collection, TARGET_DATASET_GRAPH_NAMES)
model_card.model_parameters.data[1].graphics.collection = filter_graphs(
model_card.model_parameters.data[1].graphics.collection, TARGET_DATASET_GRAPH_NAMES)
###Output
_____no_output_____
###Markdown
We then add (optional) descriptions for each of the graph sections.
###Code
model_card.model_parameters.data[0].name = 'train_set'
model_card.model_parameters.data[0].graphics.description = (
'This section includes graphs displaying the class distribution for the '
'“Race” and “Sex” attributes in our training dataset. We chose to '
'show these graphs in particular because we felt it was important that '
'users see the class imbalance.'
)
model_card.model_parameters.data[1].name = 'eval_set'
model_card.model_parameters.data[1].graphics.description = (
'Like the training set, we provide graphs showing the class distribution '
'of the data we used to evaluate our model’s performance. '
)
model_card.quantitative_analysis.graphics.description = (
'These graphs show how the model performs for data sliced by “Race”, '
'“Sex” and the intersection of these attributes. The metrics we chose '
'to display are “Accuracy”, “False Positive Rate”, and “False '
'Negative Rate”, because we anticipated that the class imbalances might '
'cause our model to underperform for certain groups.'
)
mct.update_model_card(model_card)
###Output
_____no_output_____
###Markdown
Generate the Model Card.We can now display the Model Card in HTML format.
###Code
html = mct.export_format()
display.display(display.HTML(html))
###Output
_____no_output_____
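###Markdown
If you want to share the card outside the notebook, the rendered HTML string can simply be written to disk; the filename here is illustrative.
###Code
with open('census_income_model_card.html', 'w') as f:
  f.write(html)
###Output
_____no_output_____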
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
MLMD Model Card Toolkit Demo BackgroundThis notebook demonstrates how to generate a model card using the Model Card Toolkit with MLMD and TFX pipeline in a Jupyter/Colab environment. You can learn more about model cards at https://modelcards.withgoogle.com/about. SetupWe first need to a) install and import the necessary packages, and b) download the data. Upgrade to Pip 20.2 and Install TFX
###Code
try:
import colab
!pip install --upgrade pip==20.2
except:
pass
!pip install "tfx>=0.21.1,<0.22"
!pip install model-card-toolkit
###Output
_____no_output_____
###Markdown
Did you restart the runtime?If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages. Import packagesWe import necessary packages, including standard TFX component classes and check the library versions.
###Code
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import ExampleValidator
from tfx.components import Pusher
from tfx.components import ResolverNode
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components.base import executor_spec
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
from tfx.utils.dsl_utils import external_input
import ml_metadata as mlmd
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.version.__version__))
print('MLMD version: {}'.format(mlmd.__version__))
###Output
_____no_output_____
###Markdown
Set up pipeline paths
###Code
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
###Output
_____no_output_____
###Markdown
Download example dataWe download the example dataset for use in our TFX pipeline.
###Code
DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/' \
'adult.data'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
columns = [
"Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital-Status",
"Occupation", "Relationship", "Race", "Sex", "Capital-Gain", "Capital-Loss",
"Hours-per-week", "Country", "Over-50K"]
with open(_data_filepath, 'r') as f:
content = f.read()
content = content.replace(", <=50K", ', 0').replace(", >50K", ', 1')
with open(_data_filepath, 'w') as f:
f.write(','.join(columns) + '\n' + content)
###Output
_____no_output_____
###Markdown
Take a quick look at the CSV file.
###Code
!head {_data_filepath}
###Output
_____no_output_____
###Markdown
Create the InteractiveContextLast, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
###Code
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
###Output
_____no_output_____
###Markdown
Run TFX components interactivelyIn the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at [TFX Colab workshop](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb). ExampleGenCreate the `ExampleGen` component to split data into training and evaluation sets, convert the data into `tf.Example` format, and copy data into the `_tfx_root` directory for other components to access.
###Code
example_gen = CsvExampleGen(input=external_input(_data_root))
context.run(example_gen)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
###Output
_____no_output_____
###Markdown
Let’s take a look at the first three training examples:
###Code
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
###Output
_____no_output_____
###Markdown
StatisticsGen`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen` and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
###Code
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
###Output
_____no_output_____
###Markdown
After `StatisticsGen` finishes running, we can visualize the statistics it produced. Try playing with the different plots!
###Code
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
SchemaGen`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
###Code
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen). ExampleValidator`ExampleValidator` will take as input the statistics from `StatisticsGen`, and the schema from `SchemaGen`.By default, it compares the statistics from the evaluation split to the schema from the training split.
###Code
example_validator = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator)
context.show(example_validator.outputs['anomalies'])
###Output
_____no_output_____
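###Markdown
The anomalies artifact is also just a proto on disk, so you can check it programmatically, e.g. to fail a pipeline on unexpected drift. A sketch, assuming the conventional `anomalies.pbtxt` filename (it varies across TFX versions):
###Code
from google.protobuf import text_format
from tensorflow_metadata.proto.v0 import anomalies_pb2
anomalies_path = os.path.join(
    example_validator.outputs['anomalies'].get()[0].uri, 'anomalies.pbtxt')
anomalies = text_format.Parse(
    tf.io.gfile.GFile(anomalies_path).read(), anomalies_pb2.Anomalies())
print(anomalies.anomaly_info or 'No anomalies found.')
###Output
_____no_output_____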
###Markdown
Transform`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)).
###Code
_census_income_constants_module_file = 'census_income_constants.py'
%%writefile {_census_income_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [20]
CATEGORICAL_FEATURE_KEYS = ["Education-Num"]
DENSE_FLOAT_FEATURE_KEYS = ["Capital-Gain", "Hours-per-week", "Capital-Loss"]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = ["Age"]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 200
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = ["Workclass", "Education", "Marital-Status", "Occupation",
"Relationship", "Race", "Sex", "Country"]
# Keys
LABEL_KEY = "Over-50K"
def transformed_name(key):
return key + '_xf'
_census_income_transform_module_file = 'census_income_transform.py'
%%writefile {_census_income_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
label = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = label
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_census_income_transform_module_file))
context.run(transform)
transform.outputs
###Output
_____no_output_____
###Markdown
TrainerLet's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, [see the tutorial](https://www.tensorflow.org/guide/keras)):
###Code
_census_income_trainer_module_file = 'census_income_trainer.py'
%%writefile {_census_income_trainer_module_file}
from typing import List, Text
import os
import absl
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = census_income_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(_LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
transformed_features.pop(_transformed_name(_LABEL_KEY))
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
tf_transform_output: A TFTransformOutput.
    batch_size: the number of consecutive elements of the returned dataset to
      combine in a single batch.
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_transformed_name(_LABEL_KEY))
return dataset
def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
"""Creates a DNN Keras model.
Args:
hidden_units: [int], the layer sizes of the DNN (input layer first).
Returns:
A keras Model.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
indicator_column = [
tf.feature_column.indicator_column(categorical_column)
for categorical_column in categorical_columns
]
model = _wide_and_deep_classifier(
# TODO(b/139668410) replace with premade wide_and_deep keras model
wide_columns=indicator_column,
deep_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25])
return model
def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
"""Build a simple keras wide and deep model.
Args:
wide_columns: Feature columns wrapped in indicator_column for wide (linear)
part of the model.
deep_columns: Feature columns for deep part of the model.
dnn_hidden_units: [int], the layer sizes of the hidden DNN.
Returns:
A Wide and Deep Keras model
"""
  # Following values are hard coded for simplicity in this example;
  # however, they should preferably be passed in as hparams.
# Keras needs the feature definitions at compile time.
# TODO(b/139081439): Automate generation of input layers from FeatureColumn.
input_layers = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
for colname in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
}
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_VOCAB_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_BUCKET_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_CATEGORICAL_FEATURE_KEYS)
})
# TODO(b/161816639): SparseFeatures for feature columns + Keras.
deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
for numnodes in dnn_hidden_units:
deep = tf.keras.layers.Dense(numnodes)(deep)
wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
output = tf.keras.layers.Dense(
1, activation='sigmoid')(
tf.keras.layers.concatenate([deep, wide]))
model = tf.keras.Model(input_layers, output)
model.compile(
loss='binary_crossentropy',
      optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary(print_fn=absl.logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 40)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 40)
model = _build_keras_model(
      # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
])
# This log path might change in the future.
log_dir = os.path.join(os.path.dirname(fn_args.serving_model_dir), 'logs')
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath(_census_income_trainer_module_file),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=100),
eval_args=trainer_pb2.EvalArgs(num_steps=50))
context.run(trainer)
###Output
_____no_output_____
###Markdown
EvaluatorThe `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. `Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below:
###Code
from google.protobuf.wrappers_pb2 import BoolValue
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(label_key="Over-50K")
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='FairnessIndicators',
config='{ "thresholds": [0.5] }'),
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced by feature column Race and Sex.
tfma.SlicingSpec(feature_keys=['Race']),
tfma.SlicingSpec(feature_keys=['Sex']),
tfma.SlicingSpec(feature_keys=['Race', 'Sex']),
],
options = tfma.Options(compute_confidence_intervals=BoolValue(value=True))
)
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
context.run(evaluator)
evaluator.outputs
###Output
_____no_output_____
###Markdown
Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.
###Code
context.show(evaluator.outputs['evaluation'])
###Output
_____no_output_____
###Markdown
Populate Properties from ModelCard with Model Card ToolkitNow that we’ve set up our TFX pipeline, we will use the Model Card Toolkit to extract key artifacts from the run and populate a Model Card. Connect to the MLMD store used by the InteractiveContext
###Code
from ml_metadata.metadata_store import metadata_store
from IPython import display
mlmd_store = metadata_store.MetadataStore(context.metadata_connection_config)
model_uri = trainer.outputs["model"].get()[0].uri
###Output
_____no_output_____
###Markdown
Use Model Card Toolkit Initialize the Model Card Toolkit.
###Code
from model_card_toolkit import ModelCardToolkit
mct = ModelCardToolkit(mlmd_store=mlmd_store, model_uri=model_uri)
###Output
_____no_output_____
###Markdown
Create Model Card workspace.
###Code
model_card = mct.scaffold_assets()
###Output
_____no_output_____
###Markdown
Annotate more information into Model Card.It is also important to document model information that matters to downstream users, such as its limitations, intended use cases, trade-offs, and ethical considerations. For each of these sections, we can directly add new JSON objects to represent this information.
###Code
model_card.model_details.name = 'Census Income Classifier'
model_card.model_details.overview = (
'This is a wide and deep Keras model which aims to classify whether or not '
'an individual has an income of over $50,000 based on various demographic '
'features. The model is trained on the UCI Census Income Dataset. This is '
'not a production model, and this dataset has traditionally only been used '
'for research purposes. In this Model Card, you can review quantitative '
'components of the model’s performance and data, as well as information '
'about the model’s intended uses, limitations, and ethical considerations.'
)
model_card.model_details.owners = [
{'name': 'Model Cards Team', 'contact': '[email protected]'}
]
model_card.considerations.use_cases = [
'This dataset that this model was trained on was originally created to '
'support the machine learning community in conducting empirical analysis '
'of ML algorithms. The Adult Data Set can be used in fairness-related '
'studies that compare inequalities across sex and race, based on '
'people’s annual incomes.'
]
model_card.considerations.limitations = [
'This is a class-imbalanced dataset across a variety of sensitive classes.'
' The ratio of male-to-female examples is about 2:1 and there are far more'
' examples with the “white” attribute than every other race combined. '
'Furthermore, the ratio of $50,000 or less earners to $50,000 or more '
'earners is just over 3:1. Due to the imbalance across income levels, we '
'can see that our true negative rate seems quite high, while our true '
'positive rate seems quite low. This is true to an even greater degree '
'when we only look at the “female” sub-group, because there are even '
'fewer female examples in the $50,000+ earner group, causing our model to '
'overfit these examples. To avoid this, we can try various remediation '
'strategies in future iterations (e.g. undersampling, hyperparameter '
'tuning, etc), but we may not be able to fix all of the fairness issues.'
]
model_card.considerations.ethical_considerations = [{
'name':
'We risk expressing the viewpoint that the attributes in this dataset '
'are the only ones that are predictive of someone’s income, even '
'though we know this is not the case.',
'mitigation_strategy':
'As mentioned, some interventions may need to be performed to address '
'the class imbalances in the dataset.'
}]
###Output
_____no_output_____
###Markdown
Filter and Add Graphs.We can filter the graphs generated by the TFX components to include those most relevant for the Model Card using the function defined below. In this example, we filter for `race` and `sex`, two potentially sensitive attributes. Each Model Card will have up to three sections for graphs -- training dataset statistics, evaluation dataset statistics, and quantitative analysis of our model’s performance.
###Code
# These are the graphs that will appear in the Quantitative Analysis portion of
# the Model Card. Feel free to add or remove from this list.
TARGET_EVAL_GRAPH_NAMES = [
'fairness_indicators_metrics/[email protected]',
'fairness_indicators_metrics/[email protected]',
'binary_accuracy',
'example_count | Race_X_Sex',
]
# These are the graphs that will appear in both the Train Set and Eval Set
# portions of the Model Card. Feel free to add or remove from this list.
TARGET_DATASET_GRAPH_NAMES = [
'counts | Race',
'counts | Sex',
]
def filter_graphs(graphics, target_graph_names):
result = []
for graph in graphics:
for target_graph_name in target_graph_names:
if graph.name.startswith(target_graph_name):
result.append(graph)
result.sort(key=lambda g: g.name)
return result
# Populating the three different sections using the filter defined above. To
# see all the graphs available in a section, we can iterate through each of the
# different collections.
model_card.quantitative_analysis.graphics.collection = filter_graphs(
model_card.quantitative_analysis.graphics.collection, TARGET_EVAL_GRAPH_NAMES)
model_card.model_parameters.data.eval.graphics.collection = filter_graphs(
model_card.model_parameters.data.eval.graphics.collection, TARGET_DATASET_GRAPH_NAMES)
model_card.model_parameters.data.train.graphics.collection = filter_graphs(
model_card.model_parameters.data.train.graphics.collection, TARGET_DATASET_GRAPH_NAMES)
###Output
_____no_output_____
###Markdown
We then add (optional) descriptions for each of the graph sections.
###Code
model_card.model_parameters.data.train.graphics.description = (
'This section includes graphs displaying the class distribution for the '
'“Race” and “Sex” attributes in our training dataset. We chose to '
'show these graphs in particular because we felt it was important that '
'users see the class imbalance.'
)
model_card.model_parameters.data.eval.graphics.description = (
'Like the training set, we provide graphs showing the class distribution '
'of the data we used to evaluate our model’s performance. '
)
model_card.quantitative_analysis.graphics.description = (
'These graphs show how the model performs for data sliced by “Race”, '
'“Sex” and the intersection of these attributes. The metrics we chose '
'to display are “Accuracy”, “False Positive Rate”, and “False '
'Negative Rate”, because we anticipated that the class imbalances might '
'cause our model to underperform for certain groups.'
)
mct.update_model_card_json(model_card)
###Output
_____no_output_____
###Markdown
Generate the Model Card.We can now display the Model Card in HTML format.
###Code
html = mct.export_format()
display.display(display.HTML(html))
###Output
_____no_output_____
###Markdown
Copyright © 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
MLMD Model Card Toolkit Demo BackgroundThis notebook demonstrates how to generate a model card using the Model Card Toolkit with MLMD and TFX pipeline in a Jupyter/Colab environment. You can learn more about model cards at https://modelcards.withgoogle.com/about. SetupFirst, we install and import the necessary packages and download data. Upgrade Pip and Install TFX
###Code
try:
import colab
!pip install --upgrade pip
except:
pass
!pip install "tfx>=0.21.1,<0.22"
!pip install model-card-toolkit
###Output
_____no_output_____
###Markdown
Did you restart the runtime?If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages. Import packagesWe import necessary packages, including standard TFX component classes and check the library versions.
###Code
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import ExampleValidator
from tfx.components import Pusher
from tfx.components import ResolverNode
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components.base import executor_spec
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
from tfx.utils.dsl_utils import external_input
import ml_metadata
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
print('MLMD version: {}'.format(ml_metadata.__version__))
###Output
_____no_output_____
###Markdown
Set up pipeline paths
###Code
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
###Output
_____no_output_____
###Markdown
Download example dataWe download the example dataset for use in our TFX pipeline.
###Code
DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/' \
'adult.data'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
columns = [
"Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital-Status",
"Occupation", "Relationship", "Race", "Sex", "Capital-Gain", "Capital-Loss",
"Hours-per-week", "Country", "Over-50K"]
with open(_data_filepath, 'r') as f:
content = f.read()
content = content.replace(", <=50K", ', 0').replace(", >50K", ', 1')
with open(_data_filepath, 'w') as f:
f.write(','.join(columns) + '\n' + content)
###Output
_____no_output_____
###Markdown
Take a quick look at the CSV file.
###Code
!head {_data_filepath}
###Output
_____no_output_____
###Markdown
Create the InteractiveContextLast, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
###Code
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
###Output
_____no_output_____
###Markdown
Run TFX components interactivelyIn the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at [TFX Colab workshop](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb). ExampleGenCreate the `ExampleGen` component to split data into training and evaluation sets, convert the data into `tf.Example` format, and copy data into the `_tfx_root` directory for other components to access.
###Code
example_gen = CsvExampleGen(input=external_input(_data_root))
context.run(example_gen)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
###Output
_____no_output_____
###Markdown
Let’s take a look at the first three training examples:
###Code
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
###Output
_____no_output_____
###Markdown
StatisticsGen. `StatisticsGen` takes as input the dataset we just ingested using `ExampleGen` and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
###Code
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
###Output
_____no_output_____
###Markdown
After `StatisticsGen` finishes running, we can visualize the resulting statistics. Try playing with the different plots!
###Code
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
SchemaGen. `SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
###Code
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
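The schema artifact written by `SchemaGen` can also be inspected programmatically. The sketch below is an illustrative addition, not part of the original pipeline: it assumes the artifact directory contains a `schema.pbtxt` file (the conventional name) and uses TensorFlow Data Validation, which ships with TFX.
###Code
# A minimal sketch of loading the generated schema programmatically.
# Assumes the schema artifact directory contains `schema.pbtxt`, the
# conventional file name written by SchemaGen.
import tensorflow_data_validation as tfdv

schema_uri = schema_gen.outputs['schema'].get()[0].uri
schema = tfdv.load_schema_text(os.path.join(schema_uri, 'schema.pbtxt'))
print([feature.name for feature in schema.feature])
###Output
_____no_output_____
###Markdown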
To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen). ExampleValidator. `ExampleValidator` will take as input the statistics from `StatisticsGen`, and the schema from `SchemaGen`. By default, it compares the statistics from the evaluation split to the schema from the training split.
###Code
example_validator = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator)
context.show(example_validator.outputs['anomalies'])
###Output
_____no_output_____
###Markdown
Transform. `Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code. Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)).
###Code
_census_income_constants_module_file = 'census_income_constants.py'
%%writefile {_census_income_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [20]
CATEGORICAL_FEATURE_KEYS = ["Education-Num"]
DENSE_FLOAT_FEATURE_KEYS = ["Capital-Gain", "Hours-per-week", "Capital-Loss"]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = ["Age"]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 200
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = ["Workclass", "Education", "Marital-Status", "Occupation",
"Relationship", "Race", "Sex", "Country"]
# Keys
LABEL_KEY = "Over-50K"
def transformed_name(key):
return key + '_xf'
_census_income_transform_module_file = 'census_income_transform.py'
%%writefile {_census_income_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, filling missing values and scaling to a z-score.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
label = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = label
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_census_income_transform_module_file))
context.run(transform)
transform.outputs
###Output
_____no_output_____
###Markdown
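Before moving on, here is a small standalone sketch (not part of the pipeline) that illustrates what `_fill_in_missing` does: a rank-2 `SparseTensor` with a gap is densified with a default value and squeezed down to rank 1.
###Code
# Toy illustration of the `_fill_in_missing` logic above: row 1 has no
# value, so it is filled with the string default '' and the result is
# squeezed from shape [3, 1] down to rank 1.
sparse = tf.SparseTensor(
    indices=[[0, 0], [2, 0]], values=['a', 'b'], dense_shape=[3, 1])
dense = tf.squeeze(tf.sparse.to_dense(sparse, default_value=''), axis=1)
print(dense.numpy())  # [b'a' b'' b'b']
###Output
_____no_output_____
###Markdown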
Trainer. Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, [see the tutorial](https://www.tensorflow.org/guide/keras)):
###Code
_census_income_trainer_module_file = 'census_income_trainer.py'
%%writefile {_census_income_trainer_module_file}
from typing import List, Text
import os
import absl
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = census_income_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(_LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
# Guard the pop: depending on the TFT version, the transformed label may
# not be present when the raw label was removed from the feature spec.
if _transformed_name(_LABEL_KEY) in transformed_features:
transformed_features.pop(_transformed_name(_LABEL_KEY))
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
tf_transform_output: A TFTransformOutput.
batch_size: the number of consecutive elements of the returned
dataset to combine in a single batch.
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_transformed_name(_LABEL_KEY))
return dataset
def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
"""Creates a DNN Keras model.
Args:
hidden_units: [int], the layer sizes of the DNN (input layer first).
Returns:
A keras Model.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
indicator_column = [
tf.feature_column.indicator_column(categorical_column)
for categorical_column in categorical_columns
]
model = _wide_and_deep_classifier(
# TODO(b/139668410) replace with premade wide_and_deep keras model
wide_columns=indicator_column,
deep_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25])
return model
def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
"""Build a simple keras wide and deep model.
Args:
wide_columns: Feature columns wrapped in indicator_column for wide (linear)
part of the model.
deep_columns: Feature columns for deep part of the model.
dnn_hidden_units: [int], the layer sizes of the hidden DNN.
Returns:
A Wide and Deep Keras model
"""
# The following values are hard-coded for simplicity in this example;
# preferably they should be passed in as hparams.
# Keras needs the feature definitions at compile time.
# TODO(b/139081439): Automate generation of input layers from FeatureColumn.
input_layers = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
for colname in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
}
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_VOCAB_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_BUCKET_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_CATEGORICAL_FEATURE_KEYS)
})
# TODO(b/161816639): SparseFeatures for feature columns + Keras.
deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
for numnodes in dnn_hidden_units:
deep = tf.keras.layers.Dense(numnodes)(deep)
wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
output = tf.keras.layers.Dense(
1, activation='sigmoid')(
tf.keras.layers.concatenate([deep, wide]))
model = tf.keras.Model(input_layers, output)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary(print_fn=absl.logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 40)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 40)
model = _build_keras_model(
# Construct layer sizes with exponential decay (here: 100, 70, 49, 34)
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
])
# This log path might change in the future.
log_dir = os.path.join(os.path.dirname(fn_args.serving_model_dir), 'logs')
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath(_census_income_trainer_module_file),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=100),
eval_args=trainer_pb2.EvalArgs(num_steps=50))
context.run(trainer)
###Output
_____no_output_____
###Markdown
Evaluator. The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. `Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below:
###Code
from google.protobuf.wrappers_pb2 import BoolValue
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(label_key="Over-50K")
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='FairnessIndicators',
config='{ "thresholds": [0.5] }'),
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced by feature column Race and Sex.
tfma.SlicingSpec(feature_keys=['Race']),
tfma.SlicingSpec(feature_keys=['Sex']),
tfma.SlicingSpec(feature_keys=['Race', 'Sex']),
],
options=tfma.Options(compute_confidence_intervals=BoolValue(value=True))
)
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
context.run(evaluator)
evaluator.outputs
###Output
_____no_output_____
###Markdown
Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.
###Code
context.show(evaluator.outputs['evaluation'])
###Output
_____no_output_____
###Markdown
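The metrics can also be read programmatically rather than through the visualization. The following is a hedged sketch: it assumes the `evaluation` artifact URI is a valid output path for `tfma.load_eval_result`, which is that function's documented usage.
###Code
# A minimal sketch of reading the evaluation results programmatically.
eval_uri = evaluator.outputs['evaluation'].get()[0].uri
eval_result = tfma.load_eval_result(eval_uri)
for slice_key, metrics in eval_result.slicing_metrics:
    if not slice_key:  # the empty tuple is the overall (unsliced) slice
        print(metrics)
###Output
_____no_output_____
###Markdown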
Populate Properties from ModelCard with Model Card Toolkit. Now that we’ve set up our TFX pipeline, we will use the Model Card Toolkit to extract key artifacts from the run and populate a Model Card. Connect to the MLMD store used by the InteractiveContext.
###Code
from ml_metadata.metadata_store import metadata_store
from IPython import display
mlmd_store = metadata_store.MetadataStore(context.metadata_connection_config)
model_uri = trainer.outputs["model"].get()[0].uri
###Output
_____no_output_____
###Markdown
Use Model Card Toolkit. Initialize the Model Card Toolkit.
###Code
from model_card_toolkit import ModelCardToolkit
mct = ModelCardToolkit(mlmd_store=mlmd_store, model_uri=model_uri)
###Output
_____no_output_____
###Markdown
Create Model Card workspace.
###Code
model_card = mct.scaffold_assets()
###Output
_____no_output_____
###Markdown
Annotate more information into the Model Card. It is also important to document model information that matters to downstream users, such as its limitations, intended use cases, trade-offs, and ethical considerations. For each of these sections, we can directly add new JSON objects to represent this information.
###Code
model_card.model_details.name = 'Census Income Classifier'
model_card.model_details.overview = (
'This is a wide and deep Keras model which aims to classify whether or not '
'an individual has an income of over $50,000 based on various demographic '
'features. The model is trained on the UCI Census Income Dataset. This is '
'not a production model, and this dataset has traditionally only been used '
'for research purposes. In this Model Card, you can review quantitative '
'components of the model’s performance and data, as well as information '
'about the model’s intended uses, limitations, and ethical considerations.'
)
model_card.model_details.owners = [
{'name': 'Model Cards Team', 'contact': '[email protected]'}
]
model_card.considerations.use_cases = [
'The dataset this model was trained on was originally created to '
'support the machine learning community in conducting empirical analysis '
'of ML algorithms. The Adult Data Set can be used in fairness-related '
'studies that compare inequalities across sex and race, based on '
'people’s annual incomes.'
]
model_card.considerations.limitations = [
'This is a class-imbalanced dataset across a variety of sensitive classes.'
' The ratio of male-to-female examples is about 2:1 and there are far more'
' examples with the “white” attribute than every other race combined. '
'Furthermore, the ratio of $50,000 or less earners to $50,000 or more '
'earners is just over 3:1. Due to the imbalance across income levels, we '
'can see that our true negative rate seems quite high, while our true '
'positive rate seems quite low. This is true to an even greater degree '
'when we only look at the “female” sub-group, because there are even '
'fewer female examples in the $50,000+ earner group, causing our model to '
'overfit these examples. To avoid this, we can try various remediation '
'strategies in future iterations (e.g. undersampling, hyperparameter '
'tuning, etc), but we may not be able to fix all of the fairness issues.'
]
model_card.considerations.ethical_considerations = [{
'name':
'We risk expressing the viewpoint that the attributes in this dataset '
'are the only ones that are predictive of someone’s income, even '
'though we know this is not the case.',
'mitigation_strategy':
'As mentioned, some interventions may need to be performed to address '
'the class imbalances in the dataset.'
}]
###Output
_____no_output_____
###Markdown
Filter and Add Graphs. We can filter the graphs generated by the TFX components to include those most relevant for the Model Card using the function defined below. In this example, we filter for `race` and `sex`, two potentially sensitive attributes. Each Model Card will have up to three sections for graphs -- training dataset statistics, evaluation dataset statistics, and quantitative analysis of our model’s performance.
###Code
# These are the graphs that will appear in the Quantitative Analysis portion of
# the Model Card. Feel free to add or remove from this list.
TARGET_EVAL_GRAPH_NAMES = [
'fairness_indicators_metrics/[email protected]',
'fairness_indicators_metrics/[email protected]',
'binary_accuracy',
'example_count | Race_X_Sex',
]
# These are the graphs that will appear in both the Train Set and Eval Set
# portions of the Model Card. Feel free to add or remove from this list.
TARGET_DATASET_GRAPH_NAMES = [
'counts | Race',
'counts | Sex',
]
def filter_graphs(graphics, target_graph_names):
result = []
for graph in graphics:
for target_graph_name in target_graph_names:
if graph.name.startswith(target_graph_name):
result.append(graph)
result.sort(key=lambda g: g.name)
return result
# Populating the three different sections using the filter defined above. To
# see all the graphs available in a section, we can iterate through each of the
# different collections.
model_card.quantitative_analysis.graphics.collection = filter_graphs(
model_card.quantitative_analysis.graphics.collection, TARGET_EVAL_GRAPH_NAMES)
model_card.model_parameters.data.eval.graphics.collection = filter_graphs(
model_card.model_parameters.data.eval.graphics.collection, TARGET_DATASET_GRAPH_NAMES)
model_card.model_parameters.data.train.graphics.collection = filter_graphs(
model_card.model_parameters.data.train.graphics.collection, TARGET_DATASET_GRAPH_NAMES)
###Output
_____no_output_____
###Markdown
We then add (optional) descriptions for each of the graph sections.
###Code
model_card.model_parameters.data.train.graphics.description = (
'This section includes graphs displaying the class distribution for the '
'“Race” and “Sex” attributes in our training dataset. We chose to '
'show these graphs in particular because we felt it was important that '
'users see the class imbalance.'
)
model_card.model_parameters.data.eval.graphics.description = (
'Like the training set, we provide graphs showing the class distribution '
'of the data we used to evaluate our model’s performance. '
)
model_card.quantitative_analysis.graphics.description = (
'These graphs show how the model performs for data sliced by “Race”, '
'“Sex” and the intersection of these attributes. The metrics we chose '
'to display are “Accuracy”, “False Positive Rate”, and “False '
'Negative Rate”, because we anticipated that the class imbalances might '
'cause our model to underperform for certain groups.'
)
mct.update_model_card_json(model_card)
###Output
_____no_output_____
###Markdown
Generate the Model Card. We can now display the Model Card in HTML format.
###Code
html = mct.export_format()
display.display(display.HTML(html))
###Output
_____no_output_____
###Markdown
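The rendered HTML can also be written to disk so the Model Card can be shared outside the notebook. A minimal sketch (the output file name here is arbitrary):
###Code
# Persist the rendered Model Card for sharing outside the notebook.
with open('model_card.html', 'w') as f:
    f.write(html)
###Output
_____no_output_____
###Markdown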
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
MLMD Model Card Toolkit Demo. Background. This notebook demonstrates how to generate a model card using the Model Card Toolkit with MLMD and a TFX pipeline in a Jupyter/Colab environment. You can learn more about model cards at https://modelcards.withgoogle.com/about. Setup. We first need to a) install and import the necessary packages, and b) download the data. Upgrade to Pip 20.2 and Install TFX.
###Code
!pip install --upgrade pip==20.2
!pip install "tfx==1.2.0"
!pip install model-card-toolkit
###Output
_____no_output_____
###Markdown
Did you restart the runtime? If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages. Import packages. We import the necessary packages, including standard TFX component classes, and check the library versions.
###Code
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import Pusher
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components.base import executor_spec
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
import ml_metadata as mlmd
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.version.__version__))
print('MLMD version: {}'.format(mlmd.__version__))
###Output
_____no_output_____
###Markdown
Set up pipeline paths
###Code
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
###Output
_____no_output_____
###Markdown
Download example data. We download the example dataset for use in our TFX pipeline.
###Code
DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/' \
'adult.data'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
columns = [
"Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital-Status",
"Occupation", "Relationship", "Race", "Sex", "Capital-Gain", "Capital-Loss",
"Hours-per-week", "Country", "Over-50K"]
with open(_data_filepath, 'r') as f:
content = f.read()
content = content.replace(", <=50K", ', 0').replace(", >50K", ', 1')
with open(_data_filepath, 'w') as f:
f.write(','.join(columns) + '\n' + content)
###Output
_____no_output_____
###Markdown
Take a quick look at the CSV file.
###Code
!head {_data_filepath}
###Output
_____no_output_____
###Markdown
Create the InteractiveContext. Lastly, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
###Code
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext(pipeline_name="Census Income Classification Pipeline")
###Output
_____no_output_____
###Markdown
Run TFX components interactively. In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at [TFX Colab workshop](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb). ExampleGen. Create the `ExampleGen` component to split data into training and evaluation sets, convert the data into `tf.Example` format, and copy data into the `_tfx_root` directory for other components to access.
###Code
example_gen = CsvExampleGen(input_base=_data_root)
context.run(example_gen)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
###Output
_____no_output_____
###Markdown
Let’s take a look at the first three training examples:
###Code
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
###Output
_____no_output_____
###Markdown
StatisticsGen. `StatisticsGen` takes as input the dataset we just ingested using `ExampleGen` and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
###Code
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
###Output
_____no_output_____
###Markdown
After `StatisticsGen` finishes running, we can visualize the resulting statistics. Try playing with the different plots!
###Code
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
SchemaGen. `SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
###Code
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen). Transform. `Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code. Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)).
###Code
_census_income_constants_module_file = 'census_income_constants.py'
%%writefile {_census_income_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [20]
CATEGORICAL_FEATURE_KEYS = ["Education-Num"]
DENSE_FLOAT_FEATURE_KEYS = ["Capital-Gain", "Hours-per-week", "Capital-Loss"]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = ["Age"]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 200
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = ["Workclass", "Education", "Marital-Status", "Occupation",
"Relationship", "Race", "Sex", "Country"]
# Keys
LABEL_KEY = "Over-50K"
def transformed_name(key):
return key + '_xf'
_census_income_transform_module_file = 'census_income_transform.py'
%%writefile {_census_income_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, filling missing values and scaling to a z-score.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
label = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = label
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_census_income_transform_module_file))
context.run(transform)
transform.outputs
###Output
_____no_output_____
###Markdown
Trainer. Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, [see the tutorial](https://www.tensorflow.org/guide/keras)):
###Code
_census_income_trainer_module_file = 'census_income_trainer.py'
%%writefile {_census_income_trainer_module_file}
from typing import List, Text
import os
import absl
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = census_income_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(_LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
if _transformed_name(_LABEL_KEY) in transformed_features:
transformed_features.pop(_transformed_name(_LABEL_KEY))
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
tf_transform_output: A TFTransformOutput.
batch_size: the number of consecutive elements of the returned
dataset to combine in a single batch.
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_transformed_name(_LABEL_KEY))
return dataset
def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
"""Creates a DNN Keras model.
Args:
hidden_units: [int], the layer sizes of the DNN (input layer first).
Returns:
A keras Model.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
indicator_column = [
tf.feature_column.indicator_column(categorical_column)
for categorical_column in categorical_columns
]
model = _wide_and_deep_classifier(
# TODO(b/139668410) replace with premade wide_and_deep keras model
wide_columns=indicator_column,
deep_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25])
return model
def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
"""Build a simple keras wide and deep model.
Args:
wide_columns: Feature columns wrapped in indicator_column for wide (linear)
part of the model.
deep_columns: Feature columns for deep part of the model.
dnn_hidden_units: [int], the layer sizes of the hidden DNN.
Returns:
A Wide and Deep Keras model
"""
# The following values are hard-coded for simplicity in this example;
# preferably they should be passed in as hparams.
# Keras needs the feature definitions at compile time.
# TODO(b/139081439): Automate generation of input layers from FeatureColumn.
input_layers = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
for colname in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
}
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_VOCAB_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_BUCKET_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_CATEGORICAL_FEATURE_KEYS)
})
# TODO(b/161816639): SparseFeatures for feature columns + Keras.
deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
for numnodes in dnn_hidden_units:
deep = tf.keras.layers.Dense(numnodes)(deep)
wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
output = tf.keras.layers.Dense(
1, activation='sigmoid')(
tf.keras.layers.concatenate([deep, wide]))
model = tf.keras.Model(input_layers, output)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary(print_fn=absl.logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 40)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 40)
model = _build_keras_model(
# Construct layer sizes with exponential decay (here: 100, 70, 49, 34)
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
])
# This log path might change in the future.
log_dir = os.path.join(os.path.dirname(fn_args.serving_model_dir), 'logs')
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath(_census_income_trainer_module_file),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=100),
eval_args=trainer_pb2.EvalArgs(num_steps=50))
context.run(trainer)
###Output
_____no_output_____
###Markdown
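As a quick sanity check, we can reload the exported SavedModel and inspect its serving signature. This is a hedged sketch: the `Format-Serving` subdirectory name is an assumption about where TFX 1.x places the serving model inside the Trainer artifact, so adjust the path if your layout differs.
###Code
# A minimal sketch of inspecting the exported serving signature.
# `Format-Serving` is assumed to be the serving-model subdirectory used
# by TFX 1.x; adjust if your artifact layout differs.
model_dir = os.path.join(trainer.outputs['model'].get()[0].uri, 'Format-Serving')
loaded = tf.saved_model.load(model_dir)
print(loaded.signatures['serving_default'].structured_input_signature)
###Output
_____no_output_____
###Markdown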
Evaluator. The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. `Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below:
###Code
from google.protobuf.wrappers_pb2 import BoolValue
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(label_key="Over-50K")
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='FairnessIndicators',
config='{ "thresholds": [0.5] }'),
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced by feature column Race and Sex.
tfma.SlicingSpec(feature_keys=['Race']),
tfma.SlicingSpec(feature_keys=['Sex']),
tfma.SlicingSpec(feature_keys=['Race', 'Sex']),
],
options=tfma.Options(compute_confidence_intervals=BoolValue(value=True))
)
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
context.run(evaluator)
evaluator.outputs
###Output
_____no_output_____
###Markdown
Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.
###Code
context.show(evaluator.outputs['evaluation'])
###Output
_____no_output_____
###Markdown
Populate Properties from ModelCard with Model Card Toolkit. Now that we’ve set up our TFX pipeline, we will use the Model Card Toolkit to extract key artifacts from the run and populate a Model Card. Connect to the MLMD store used by the InteractiveContext.
###Code
from ml_metadata.metadata_store import metadata_store
from IPython import display
mlmd_store = metadata_store.MetadataStore(context.metadata_connection_config)
model_uri = trainer.outputs["model"].get()[0].uri
###Output
_____no_output_____
###Markdown
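Before handing the store to the toolkit, we can peek at what the pipeline recorded in it. This sketch uses the MLMD `MetadataStore` query API; the `'Model'` type name is an assumption about what this TFX version registered.
###Code
# A minimal sketch of querying the MLMD store directly: list the
# registered artifact types and count the Model artifacts.
for artifact_type in mlmd_store.get_artifact_types():
    print(artifact_type.name)
print('Model artifacts:', len(mlmd_store.get_artifacts_by_type('Model')))
###Output
_____no_output_____
###Markdown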
Use Model Card Toolkit. Initialize the Model Card Toolkit.
###Code
import model_card_toolkit as mctlib
mct = mctlib.ModelCardToolkit(mlmd_store=mlmd_store, model_uri=model_uri)
###Output
_____no_output_____
###Markdown
Create Model Card workspace.
###Code
model_card = mct.scaffold_assets()
###Output
_____no_output_____
###Markdown
Annotate more information into the Model Card. It is also important to document model information that matters to downstream users, such as its limitations, intended use cases, trade-offs, and ethical considerations. For each of these sections, we can directly add new JSON objects to represent this information.
###Code
model_card.model_details.name = 'Census Income Classifier'
model_card.model_details.overview = (
'This is a wide and deep Keras model which aims to classify whether or not '
'an individual has an income of over $50,000 based on various demographic '
'features. The model is trained on the UCI Census Income Dataset. This is '
'not a production model, and this dataset has traditionally only been used '
'for research purposes. In this Model Card, you can review quantitative '
'components of the model’s performance and data, as well as information '
'about the model’s intended uses, limitations, and ethical considerations.'
)
model_card.model_details.owners = [
mctlib.Owner(name='Model Cards Team',
contact='[email protected]')
]
model_card.considerations.use_cases = [mctlib.UseCase(description=
'The dataset this model was trained on was originally created to '
'support the machine learning community in conducting empirical analysis '
'of ML algorithms. The Adult Data Set can be used in fairness-related '
'studies that compare inequalities across sex and race, based on '
'people’s annual incomes.')
]
model_card.considerations.limitations = [mctlib.Limitation(description=
'This is a class-imbalanced dataset across a variety of sensitive classes.'
' The ratio of male-to-female examples is about 2:1 and there are far more'
' examples with the “white” attribute than every other race combined. '
'Furthermore, the ratio of $50,000 or less earners to $50,000 or more '
'earners is just over 3:1. Due to the imbalance across income levels, we '
'can see that our true negative rate seems quite high, while our true '
'positive rate seems quite low. This is true to an even greater degree '
'when we only look at the “female” sub-group, because there are even '
'fewer female examples in the $50,000+ earner group, causing our model to '
'overfit these examples. To avoid this, we can try various remediation '
'strategies in future iterations (e.g. undersampling, hyperparameter '
'tuning, etc), but we may not be able to fix all of the fairness issues.')
]
model_card.considerations.ethical_considerations = [mctlib.Risk(
name= 'We risk expressing the viewpoint that the attributes in this dataset '
'are the only ones that are predictive of someone’s income, even '
'though we know this is not the case.',
mitigation_strategy= 'As mentioned, some interventions may need to be '
'performed to address the class imbalances in the dataset.'
)
]
###Output
_____no_output_____
###Markdown
Filter and Add Graphs. We can filter the graphs generated by the TFX components to include those most relevant for the Model Card using the function defined below. In this example, we filter for `race` and `sex`, two potentially sensitive attributes. Each Model Card will have up to three sections for graphs -- training dataset statistics, evaluation dataset statistics, and quantitative analysis of our model’s performance.
###Code
# These are the graphs that will appear in the Quantitative Analysis portion of
# the Model Card. Feel free to add or remove from this list.
TARGET_EVAL_GRAPH_NAMES = [
'fairness_indicators_metrics/[email protected]',
'fairness_indicators_metrics/[email protected]',
'binary_accuracy',
'example_count | Race_X_Sex',
]
# These are the graphs that will appear in both the Train Set and Eval Set
# portions of the Model Card. Feel free to add or remove from this list.
TARGET_DATASET_GRAPH_NAMES = [
'counts | Race',
'counts | Sex',
]
def filter_graphs(graphics, target_graph_names):
result = []
for graph in graphics:
for target_graph_name in target_graph_names:
if graph.name.startswith(target_graph_name):
result.append(graph)
result.sort(key=lambda g: g.name)
return result
# Populating the three different sections using the filter defined above. To
# see all the graphs available in a section, we can iterate through each of the
# different collections.
model_card.quantitative_analysis.graphics.collection = filter_graphs(
model_card.quantitative_analysis.graphics.collection, TARGET_EVAL_GRAPH_NAMES)
model_card.model_parameters.data[0].graphics.collection = filter_graphs(
model_card.model_parameters.data[0].graphics.collection, TARGET_DATASET_GRAPH_NAMES)
model_card.model_parameters.data[1].graphics.collection = filter_graphs(
model_card.model_parameters.data[1].graphics.collection, TARGET_DATASET_GRAPH_NAMES)
###Output
_____no_output_____
###Markdown
We then add (optional) descriptions for each of the graph sections.
###Code
model_card.model_parameters.data[0].name = 'train_set'
model_card.model_parameters.data[0].graphics.description = (
'This section includes graphs displaying the class distribution for the '
'“Race” and “Sex” attributes in our training dataset. We chose to '
'show these graphs in particular because we felt it was important that '
'users see the class imbalance.'
)
model_card.model_parameters.data[1].name = 'eval_set'
model_card.model_parameters.data[1].graphics.description = (
'Like the training set, we provide graphs showing the class distribution '
'of the data we used to evaluate our model’s performance. '
)
model_card.quantitative_analysis.graphics.description = (
'These graphs show how the model performs for data sliced by “Race”, '
'“Sex” and the intersection of these attributes. The metrics we chose '
'to display are “Accuracy”, “False Positive Rate”, and “False '
'Negative Rate”, because we anticipated that the class imbalances might '
'cause our model to underperform for certain groups.'
)
mct.update_model_card_json(model_card)
###Output
_____no_output_____
###Markdown
Generate the Model Card. We can now display the Model Card in HTML format.
###Code
html = mct.export_format()
display.display(display.HTML(html))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
MLMD Model Card Toolkit Demo. Background. This notebook demonstrates how to generate a model card using the Model Card Toolkit with MLMD and a TFX pipeline in a Jupyter/Colab environment. You can learn more about model cards at https://modelcards.withgoogle.com/about. Setup. We first need to a) install and import the necessary packages, and b) download the data. Upgrade to Pip 20.2 and Install TFX.
###Code
!pip install --upgrade pip==20.2
!pip install "tfx==0.26.0"
!pip install model-card-toolkit
###Output
_____no_output_____
###Markdown
Did you restart the runtime? If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages. Import packages. We import the necessary packages, including standard TFX component classes, and check the library versions.
###Code
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import Pusher
from tfx.components import ResolverNode
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.components.base import executor_spec
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
from tfx.utils.dsl_utils import external_input
import ml_metadata as mlmd
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.version.__version__))
print('MLMD version: {}'.format(mlmd.__version__))
###Output
_____no_output_____
###Markdown
Set up pipeline paths
###Code
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
###Output
_____no_output_____
###Markdown
Download example data. We download the example dataset for use in our TFX pipeline.
###Code
DATA_PATH = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/' \
'adult.data'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
columns = [
"Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital-Status",
"Occupation", "Relationship", "Race", "Sex", "Capital-Gain", "Capital-Loss",
"Hours-per-week", "Country", "Over-50K"]
with open(_data_filepath, 'r') as f:
content = f.read()
content = content.replace(", <=50K", ', 0').replace(", >50K", ', 1')
with open(_data_filepath, 'w') as f:
f.write(','.join(columns) + '\n' + content)
###Output
_____no_output_____
###Markdown
Take a quick look at the CSV file.
###Code
!head {_data_filepath}
###Output
_____no_output_____
###Markdown
Create the InteractiveContext. Lastly, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
###Code
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
###Output
_____no_output_____
###Markdown
Run TFX components interactively. In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts. In this notebook, we won’t provide detailed explanations of each TFX component, but you can see what each does at [TFX Colab workshop](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb). ExampleGen. Create the `ExampleGen` component to split data into training and evaluation sets, convert the data into `tf.Example` format, and copy data into the `_tfx_root` directory for other components to access.
###Code
example_gen = CsvExampleGen(input=external_input(_data_root))
context.run(example_gen)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
###Output
_____no_output_____
###Markdown
Let’s take a look at the first three training examples:
###Code
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
###Output
_____no_output_____
###Markdown
StatisticsGen
`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen` and allows you to perform some analysis of your dataset using TensorFlow Data Validation (TFDV).
###Code
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
###Output
_____no_output_____
###Markdown
After `StatisticsGen` finishes running, we can visualize the outputted statistics. Try playing with the different plots!
###Code
context.show(statistics_gen.outputs['statistics'])
###Output
_____no_output_____
###Markdown
SchemaGen
`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
###Code
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
context.show(schema_gen.outputs['schema'])
###Output
_____no_output_____
###Markdown
To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen).

Transform
`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.

Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)).
###Code
_census_income_constants_module_file = 'census_income_constants.py'
%%writefile {_census_income_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [20]
CATEGORICAL_FEATURE_KEYS = ["Education-Num"]
DENSE_FLOAT_FEATURE_KEYS = ["Capital-Gain", "Hours-per-week", "Capital-Loss"]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = ["Age"]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 200
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = ["Workclass", "Education", "Marital-Status", "Occupation",
"Relationship", "Race", "Sex", "Country"]
# Keys
LABEL_KEY = "Over-50K"
def transformed_name(key):
return key + '_xf'
_census_income_transform_module_file = 'census_income_transform.py'
%%writefile {_census_income_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
label = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = label
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_census_income_transform_module_file))
context.run(transform)
transform.outputs
###Output
_____no_output_____
###Markdown
Trainer
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, [see the tutorial](https://www.tensorflow.org/guide/keras)):
###Code
_census_income_trainer_module_file = 'census_income_trainer.py'
%%writefile {_census_income_trainer_module_file}
from typing import List, Text
import os
import absl
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx.components.trainer.executor import TrainerFnArgs
import census_income_constants
_DENSE_FLOAT_FEATURE_KEYS = census_income_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = census_income_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = census_income_constants.VOCAB_SIZE
_OOV_SIZE = census_income_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = census_income_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = census_income_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = census_income_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = census_income_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = census_income_constants.LABEL_KEY
_transformed_name = census_income_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(_LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
if _transformed_name(_LABEL_KEY) in transformed_features:
transformed_features.pop(_transformed_name(_LABEL_KEY))
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
tf_transform_output: A TFTransformOutput.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=_transformed_name(_LABEL_KEY))
return dataset
def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
"""Creates a DNN Keras model.
Args:
hidden_units: [int], the layer sizes of the DNN (input layer first).
Returns:
A keras Model.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
indicator_column = [
tf.feature_column.indicator_column(categorical_column)
for categorical_column in categorical_columns
]
model = _wide_and_deep_classifier(
# TODO(b/139668410) replace with premade wide_and_deep keras model
wide_columns=indicator_column,
deep_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25])
return model
def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
"""Build a simple keras wide and deep model.
Args:
wide_columns: Feature columns wrapped in indicator_column for wide (linear)
part of the model.
deep_columns: Feature columns for deep part of the model.
dnn_hidden_units: [int], the layer sizes of the hidden DNN.
Returns:
A Wide and Deep Keras model
"""
# Following values are hard coded for simplicity in this example,
  # However, preferably they should be passed in as hparams.
# Keras needs the feature definitions at compile time.
# TODO(b/139081439): Automate generation of input layers from FeatureColumn.
input_layers = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
for colname in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
}
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_VOCAB_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_BUCKET_FEATURE_KEYS)
})
input_layers.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
for colname in _transformed_names(_CATEGORICAL_FEATURE_KEYS)
})
# TODO(b/161816639): SparseFeatures for feature columns + Keras.
deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
for numnodes in dnn_hidden_units:
deep = tf.keras.layers.Dense(numnodes)(deep)
wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
output = tf.keras.layers.Dense(
1, activation='sigmoid')(
tf.keras.layers.concatenate([deep, wide]))
model = tf.keras.Model(input_layers, output)
model.compile(
loss='binary_crossentropy',
      optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary(print_fn=absl.logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output, 40)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output, 40)
model = _build_keras_model(
      # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
])
# This log path might change in the future.
log_dir = os.path.join(os.path.dirname(fn_args.serving_model_dir), 'logs')
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath(_census_income_trainer_module_file),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=100),
eval_args=trainer_pb2.EvalArgs(num_steps=50))
context.run(trainer)
###Output
_____no_output_____
###Markdown
Evaluator
The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. `Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values. See an example of this configuration below:
###Code
from google.protobuf.wrappers_pb2 import BoolValue
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(label_key="Over-50K")
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy'),
tfma.MetricConfig(class_name='FairnessIndicators',
config='{ "thresholds": [0.5] }'),
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced by feature column Race and Sex.
tfma.SlicingSpec(feature_keys=['Race']),
tfma.SlicingSpec(feature_keys=['Sex']),
tfma.SlicingSpec(feature_keys=['Race', 'Sex']),
],
options = tfma.Options(compute_confidence_intervals=BoolValue(value=True))
)
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config=eval_config)
context.run(evaluator)
evaluator.outputs
###Output
_____no_output_____
###Markdown
Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.
###Code
context.show(evaluator.outputs['evaluation'])
###Output
_____no_output_____
###Markdown
Populate Properties from ModelCard with Model Card Toolkit
Now that we’ve set up our TFX pipeline, we will use the Model Card Toolkit to extract key artifacts from the run and populate a Model Card.

Connect to the MLMD store used by the InteractiveContext
###Code
from ml_metadata.metadata_store import metadata_store
from IPython import display
mlmd_store = metadata_store.MetadataStore(context.metadata_connection_config)
model_uri = trainer.outputs["model"].get()[0].uri
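# (Added illustration, not part of the original notebook.) We can sanity-check
# the connection by listing the artifact types the pipeline registered in MLMD.
# get_artifact_types() is part of the ml-metadata MetadataStore API.
for artifact_type in mlmd_store.get_artifact_types():
    print(artifact_type.name)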
###Output
_____no_output_____
###Markdown
Use Model Card Toolkit

Initialize the Model Card Toolkit.
###Code
from model_card_toolkit import ModelCardToolkit
mct = ModelCardToolkit(mlmd_store=mlmd_store, model_uri=model_uri)
###Output
_____no_output_____
###Markdown
Create Model Card workspace.
###Code
model_card = mct.scaffold_assets()
###Output
_____no_output_____
###Markdown
Annotate more information into Model Card.
It is also important to document model information that matters to downstream users, such as its limitations, intended use cases, trade-offs, and ethical considerations. For each of these sections, we can directly add new JSON objects to represent this information.
###Code
model_card.model_details.name = 'Census Income Classifier'
model_card.model_details.overview = (
'This is a wide and deep Keras model which aims to classify whether or not '
'an individual has an income of over $50,000 based on various demographic '
'features. The model is trained on the UCI Census Income Dataset. This is '
'not a production model, and this dataset has traditionally only been used '
'for research purposes. In this Model Card, you can review quantitative '
'components of the model’s performance and data, as well as information '
'about the model’s intended uses, limitations, and ethical considerations.'
)
model_card.model_details.owners = [
{'name': 'Model Cards Team', 'contact': '[email protected]'}
]
model_card.considerations.use_cases = [
    'The dataset that this model was trained on was originally created to '
'support the machine learning community in conducting empirical analysis '
'of ML algorithms. The Adult Data Set can be used in fairness-related '
'studies that compare inequalities across sex and race, based on '
'people’s annual incomes.'
]
model_card.considerations.limitations = [
'This is a class-imbalanced dataset across a variety of sensitive classes.'
' The ratio of male-to-female examples is about 2:1 and there are far more'
' examples with the “white” attribute than every other race combined. '
'Furthermore, the ratio of $50,000 or less earners to $50,000 or more '
'earners is just over 3:1. Due to the imbalance across income levels, we '
'can see that our true negative rate seems quite high, while our true '
'positive rate seems quite low. This is true to an even greater degree '
'when we only look at the “female” sub-group, because there are even '
'fewer female examples in the $50,000+ earner group, causing our model to '
'overfit these examples. To avoid this, we can try various remediation '
'strategies in future iterations (e.g. undersampling, hyperparameter '
'tuning, etc), but we may not be able to fix all of the fairness issues.'
]
model_card.considerations.ethical_considerations = [{
'name':
'We risk expressing the viewpoint that the attributes in this dataset '
'are the only ones that are predictive of someone’s income, even '
'though we know this is not the case.',
'mitigation_strategy':
'As mentioned, some interventions may need to be performed to address '
'the class imbalances in the dataset.'
}]
###Output
_____no_output_____
###Markdown
Filter and Add Graphs.
We can filter the graphs generated by the TFX components to include those most relevant for the Model Card using the function defined below. In this example, we filter for `race` and `sex`, two potentially sensitive attributes. Each Model Card will have up to three sections for graphs -- training dataset statistics, evaluation dataset statistics, and quantitative analysis of our model’s performance.
###Code
# These are the graphs that will appear in the Quantitative Analysis portion of
# the Model Card. Feel free to add or remove from this list.
TARGET_EVAL_GRAPH_NAMES = [
'fairness_indicators_metrics/[email protected]',
'fairness_indicators_metrics/[email protected]',
'binary_accuracy',
'example_count | Race_X_Sex',
]
# These are the graphs that will appear in both the Train Set and Eval Set
# portions of the Model Card. Feel free to add or remove from this list.
TARGET_DATASET_GRAPH_NAMES = [
'counts | Race',
'counts | Sex',
]
def filter_graphs(graphics, target_graph_names):
result = []
for graph in graphics:
for target_graph_name in target_graph_names:
if graph.name.startswith(target_graph_name):
result.append(graph)
result.sort(key=lambda g: g.name)
return result
# Populating the three different sections using the filter defined above. To
# see all the graphs available in a section, we can iterate through each of the
# different collections.
model_card.quantitative_analysis.graphics.collection = filter_graphs(
model_card.quantitative_analysis.graphics.collection, TARGET_EVAL_GRAPH_NAMES)
model_card.model_parameters.data.eval.graphics.collection = filter_graphs(
model_card.model_parameters.data.eval.graphics.collection, TARGET_DATASET_GRAPH_NAMES)
model_card.model_parameters.data.train.graphics.collection = filter_graphs(
model_card.model_parameters.data.train.graphics.collection, TARGET_DATASET_GRAPH_NAMES)
###Output
_____no_output_____
###Markdown
We then add (optional) descriptions for each of the graph sections.
###Code
model_card.model_parameters.data.train.graphics.description = (
'This section includes graphs displaying the class distribution for the '
'“Race” and “Sex” attributes in our training dataset. We chose to '
'show these graphs in particular because we felt it was important that '
'users see the class imbalance.'
)
model_card.model_parameters.data.eval.graphics.description = (
'Like the training set, we provide graphs showing the class distribution '
'of the data we used to evaluate our model’s performance. '
)
model_card.quantitative_analysis.graphics.description = (
'These graphs show how the model performs for data sliced by “Race”, '
'“Sex” and the intersection of these attributes. The metrics we chose '
'to display are “Accuracy”, “False Positive Rate”, and “False '
'Negative Rate”, because we anticipated that the class imbalances might '
'cause our model to underperform for certain groups.'
)
mct.update_model_card_json(model_card)
###Output
_____no_output_____
###Markdown
Generate the Model Card.
We can now display the Model Card in HTML format.
###Code
html = mct.export_format()
display.display(display.HTML(html))
###Output
_____no_output_____ |
2. Intermediate Challenge.ipynb | ###Markdown
Challenge questions

Easy questions:
1. How many total pings are in the Ocearch shark data?
2. How many unique species of sharks are in the data set?
3. What is the name, weight, and species of the heaviest shark(s)?
4. When and where was the very first ping?
5. Excluding results with 0 distance traveled: what's the minimum, average, and maximum travel distances?

Intermediate questions:
1. Which shark had the most pings?
2. Which shark has been pinging the longest, and how long has that been?
3. Which shark species has the most individual sharks tagged?
4. What is the average length and weight of each shark species?
5. Which shark has the biggest geographic box (largest distance from min lat/lon to max lat/lon, not dist_traveled)?

Load data
###Code
import pandas as pd
df = pd.read_csv('data/sharks.csv')
df.shape
###Output
_____no_output_____
###Markdown
Clean data

Explore data
###Code
df.info()
df.describe()
df.head()
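# A hedged sketch for the first two easy questions (not part of the original
# notebook). The column name 'species' below is a guess about the Ocearch CSV
# and may need adjusting to the actual headers printed by df.info().
print('Total pings:', len(df))                     # one row per ping
print('Unique species:', df['species'].nunique())  # assumed column name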
###Output
_____no_output_____ |
Insee_formatting.ipynb | ###Markdown
Formatting the INSEE data files
INSEE provides per decade a zip file containing a series of ten csv files, each covering a single year. Visit https://www.insee.fr/fr/information/4190491 to obtain the zip(s) of interest. The code below merges all the files together and returns a single csv covering the whole decade.

Dr. Morgane FORTIN, Sept. 2020

Importing libraries
###Code
import pandas as pd
import datetime as dt
import numpy as np
from zipfile import ZipFile
###Output
_____no_output_____
###Markdown
Choosing the INSEE zip file to format
###Code
zip_file = ZipFile('deces-1970-1979-csv.zip')
###Output
_____no_output_____
###Markdown
Formatting the data
###Code
# Read all the files
insee = pd.DataFrame()
for text_file in zip_file.infolist():
if text_file.filename.endswith('.csv'):
df=pd.read_csv(zip_file.open(text_file.filename),delimiter=';',dtype=str)
insee = insee.append(df,ignore_index = True)
insee
###Output
_____no_output_____
###Markdown
Columns:
1. nomprenom: family name (*nom*) and first name(s) (*prenom(s)*)
2. sexe: gender, 1 for male and 2 for female
3. datenaiss: birthdate; format YYYYMMDD
4. lieunaiss: postcode of the place of birth
5. commnaiss: place of birth
6. paysnaiss: country of birth
7. datedeces: deathdate; format YYYYMMDD
8. lieudeces: postcode of the place of death
9. actedeces: reference of the death record
###Code
# Split the date of birth into year (anneenaiss), month (moisnaiss) and day (journaiss)
insee['anneenaiss'] = insee.datenaiss.str[0:4]
insee['moisnaiss'] = insee.datenaiss.str[4:6]
insee['journaiss'] = insee.datenaiss.str[6:]
# Similarly for the date of death (anneedeces, moisdeces, jourdeces)
insee['anneedeces'] = insee.datedeces.str[0:4]
insee['moisdeces'] = insee.datedeces.str[4:6]
insee['jourdeces'] = insee.datedeces.str[6:]
# Drop the columns with the original date of birth and death
insee = insee.drop(columns=['datedeces', 'datenaiss'])
# When the country of birth (paysnaiss) is NaN replace by FRANCE
insee.paysnaiss=insee.paysnaiss.replace(np.NaN,'FRANCE')
# Split the family and first names column (nomprenom) into two: one with the family name (nom) and another one with the first name(s) (prenom(s))
insee[["nom","prenoms"]]=insee["nomprenom"].str.replace('/','').str.split("*", n = 1, expand = True)
# Rearrange the dataframe:
# 1-family name,
# 2-first names,
# 3-gender 1 for male and 2 for female,
# 4-day of birth,
# 5-month of birth,
# 6-year of birth,
# 7-postcode of the place of birth,
# 8-place of birth,
# 9-country of birth,
# 10-day of death,
# 11-month of death,
# 12-year of death,
# 13-postcode of the place of death,
# 14-reference of the death record
insee=insee[["nom","prenoms", "sexe", "journaiss","moisnaiss","anneenaiss","lieunaiss","commnaiss","paysnaiss","jourdeces","moisdeces","anneedeces","lieudeces","actedeces"]]
insee
# Export the file to a single csv
insee.to_csv('Insee_70s.csv')
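# Optional illustration (not in the original notebook): with the dates split
# into string columns, we can derive an approximate age at death. Incomplete
# dates (e.g. a '00' day, common in old records) are coerced to NaT.
naiss = pd.to_datetime(insee.anneenaiss + insee.moisnaiss + insee.journaiss,
                       format='%Y%m%d', errors='coerce')
deces = pd.to_datetime(insee.anneedeces + insee.moisdeces + insee.jourdeces,
                       format='%Y%m%d', errors='coerce')
insee['age_deces'] = (deces - naiss).dt.days // 365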
###Output
_____no_output_____ |
R/18-3 Congestion - Hard.ipynb | ###Markdown
Congestion Charges - Hard
You may need to create views to complete these questions - but you do not have permission to create tables or views in the default schema. Your SQL commands are executed by user scott in schema gisq - you may create or drop views and tables in schema scott but not in gisq.
###Code
library(tidyverse)
library(DBI)
library(getPass)
drv <- switch(Sys.info()['sysname'],
Windows="PostgreSQL Unicode(x64)",
Darwin="/usr/local/lib/psqlodbcw.so",
Linux="PostgreSQL")
con <- dbConnect(
odbc::odbc(),
driver = drv,
Server = "localhost",
Database = "sqlzoo",
UID = "postgres",
PWD = getPass("Password?"),
Port = 5432
)
options(repr.matrix.max.rows=20)
###Output
_____no_output_____ |
course_5/module_2.ipynb | ###Markdown
Lecture: The (Py)Tesseract Library
###Code
# We're going to start experimenting with tesseract using just a simple image of nice clean text.
# Lets first import Image from PIL and display the image text.png.
from PIL import Image
image = Image.open("readonly/text.png")
display(image)
# Great, we have a base image of some big clear text
# Lets import pytesseract and use the dir() function to get a sense of what might be some interesting
# functions to play with
import pytesseract
dir(pytesseract)
# It looks like there are just a handful of interesting functions, and I think image_to_string
# is probably our best bet. Lets use the help() function to interrogate this a bit more
help(pytesseract.image_to_string)
# So this function takes an image as the first parameter, then there are a bunch of optional parameters,
# and it will return the results of the OCR. I think it's worth comparing this documentation string
# with the documentation we were receiving from the PILLOW module. Lets run the help command on the
# Image resize function()
help(Image.Image.resize)
# Notice how the PILLOW function has a bit more information in it. First it's using a specific format
# called reStructuredText, which is similar in intent to document markups such as HTML, the language of
# the web. The intent is to embed semantics in the documentation itself. For instance, in the resize()
# function we see the words "param size" with colons surrounding it. This allows documentation engines
# which create web docs from source code to link the parameter to the extended docs about that parameter.
# In this case the extended docs tell us that the size should be passed as a tuple of width and height.
# Notice how the docs for image_to_string, for instance, indicate that there is a "lang" parameter we can
# use, but then fail to say anything about what that parameter is for or what its format is.
#
# What this really means is that we need to dig deeper. Here's a quick hack if you want to look at the
# source code of a function -- you can use the inspect getsource() command and print the results
import inspect
src = inspect.getsource(pytesseract.image_to_string)
print(src)
# There's actually another way in jupyter, and that's to append *two* question marks to the end of
# a given function or module. Other editors have similar features, and is a great reason to use a
# software development environment
pytesseract.image_to_string??
# We can see from the source code that there really isn't much more information about what the parameters
# are for this image_to_string function. This is because underneath the pytesseract library is calling a C++
# library which does all of the hard work, and the author just passes through all of the calls to the
# underlying tesseract executable. This is a common issue when working with python libraries, and it means
# we need to do some web sleuthing in order to understand how we can interact with tesseract.
#
# In a case like this I just googled "tesseract command line parameters" and the first hit was what I was
# looking for, here's the URL: https://github.com/tesseract-ocr/tesseract/wiki/Command-Line-Usage
#
# This goes to a wiki page which describes how to call the tesseract executable, and as we read down we see
# that we can actually have tesseract use multiple languages in its detection, such as English and Hindi, by
# passing them in as "eng+hin". Very cool.
# One last thing to mention - the image_to_string() function takes in an "image", but the docs don't
# really describe what this image is underneath. Is it a string to an image file? A PILLOW image?
# Something else?
#
# Again we have to sleuth (and/or experiment) to understand what we should do. If we look at the source
# code for the pytesseract library, we see that there is a function called run_and_get_output(). Here's
# a link to that function on the author's github account:
# https://github.com/madmaze/pytesseract/blob/d1596f7f59a517ad814b7d810ccdef7d33763221/src/pytesseract.py#L199
#
# In this function we see that one of the first things which happens is the image is saved through
# the save_image() function. Here's that line of code:
# https://github.com/madmaze/pytesseract/blob/d1596f7f59a517ad814b7d810ccdef7d33763221/src/pytesseract.py#L116
#
# And we see there that another function is called, prepare(image), which actually loads the image as a
# PILLOW image file. So yes, sending a PIL image file is appropriate use for this function! It sure would
# have been useful for the author to have included this information in reStructuredText to help us not have
# to dig through the implementation. But, this is an open source project -- maybe you would like to contribute
# back better documentation?
#
# Hint: The doc line we needed was :param image: A PIL Image.Image file or an ndarray of bytes
#
# In the end, we often don't do this full level of investigation, and we just experiment and try things. It
# seems likely that a PIL Image.Image would work, given how well known PIL is in the python world. But still,
# as you explore and use different libraries you'll see a breadth of different documentation norms, so it's
# useful to know how to explore the source code. And now that you're at the end of this course, you've got
# the skills to do so!
#
# Ok, lets try and run tesseract on this image
text = pytesseract.image_to_string(image)
print(text)
# Looks great! We see that the output includes new line characters, and faithfully represents the text
# but doesn't include any special formatting. Lets go on and look at something with a bit more nuance to it.
###Output
_____no_output_____
###Markdown
More Tesseract
###Code
# In the previous example, we were using a clear, unambiguous image for conversion. Sometimes there will
# be noise in images you want to OCR, making it difficult to extract the text. Luckily, there are
# techniques we can use to increase the efficacy of OCR with pytesseract and Pillow.
#
# Let's use a different image this time, with the same text as before but with added noise in the picture.
# We can view this image using the following code.
from PIL import Image
img = Image.open("readonly/Noisy_OCR.PNG")
display(img)
# As you can see, this image had shapes of different opacities behind the text, which can confuse
# the tesseract engine. Let's see if OCR will work on this noisy image
import pytesseract
text = pytesseract.image_to_string(Image.open("readonly/Noisy_OCR.PNG"))
print(text)
# This is a bit surprising given how nicely tesseract worked previously! Let's experiment on the image
# using techniques that will allow for more effective image analysis. First up, lets change the size of
# the image
# First we will import PIL
import PIL
# Then set the base width of our image
basewidth = 600
# Now lets open it
img = Image.open("readonly/Noisy_OCR.PNG")
# We want to get the correct aspect ratio, so we can do this by taking the base width and dividing
# it by the actual width of the image
wpercent = (basewidth / float(img.size[0]))
# With that ratio we can just get the appropriate height of the image.
hsize = int((float(img.size[1]) * float(wpercent)))
# Finally, lets resize the image. antialiasing is a specific way of resizing lines to try and make them
# appear smooth
img = img.resize((basewidth, hsize), PIL.Image.ANTIALIAS)
# Now lets save this to a file
img.save('resized_noise.png') # save the resized image as a png
# And finally, lets display it
display(img)
# and run OCR
text = pytesseract.image_to_string(Image.open('resized_noise.png'))
print(text)
# hrm, no improvement for resizing the image. Let's convert the image to greyscale. Converting images
# can be done in many different ways. If we poke around in the PILLOW documentation we find that one of
# the easiest ways to do this is to use the convert() function and pass in the string 'L'
img = Image.open('readonly/Noisy_OCR.PNG')
img = img.convert('L')
# Now lets save that image
img.save('greyscale_noise.jpg')
# And run OCR on the greyscale image
text = pytesseract.image_to_string(Image.open('greyscale_noise.jpg'))
print(text)
# Wow, that worked really well! If we look at the help documentation using the help function
# as in help(img.convert) we see that the conversion mechanism is the ITU-R 601-2 luma transform.
# There's more information about this out there, but this method essentially takes a three channel image,
# where there is information for the amount of red, green, and blue (R, G, and B), and reduces it
# to a single channel to represent luminosity. This method actually comes from how standard
# definition television sets encoded color onto black and white images. If you get really interested
# in image manipulation and recognition, learning about color spaces and how we represent color, both
# computationally and through human perception, is really an interesting field.
# Even though we have now the complete text of the image, there are a few other techniques
# we could use to help improve OCR detection in the event that the above two don't help.
# The next approach I would use is called binarization, which means to separate into two
# distinct parts - in this case, black and white. Binarization is enacted through a process
# called thresholding. If a pixel value is greater than a threshold value, it will be converted
# to a black pixel; if it is lower than the threshold it will be converted to a white pixel.
# This process eliminates noise in the OCR process allowing greater image recognition accuracy.
# With Pillow, this process is straightforward.
# Lets open the noisy image and convert it using binarization
img = Image.open('readonly/Noisy_OCR.PNG').convert('1')
# Now lets save and display that image
img.save('black_white_noise.jpg')
display(img)
# So, that was a bit magical, and really required a fine reading of the docs to figure out
# that passing the string "1" to the convert() function is what actually does the binarization.
# But you actually have all of the skills you need to write this functionality yourself.
# Lets walk through an example. First, lets define a function called binarize, which takes in
# an image and a threshold value:
def binarize(image_to_transform, threshold):
# now, lets convert that image to a single greyscale image using convert()
output_image=image_to_transform.convert("L")
# the threshold value is usually provided as a number between 0 and 255, which
# is the number of bits in a byte.
# the algorithm for the binarization is pretty simple, go through every pixel in the
# image and, if it's greater than the threshold, turn it all the way up (255), and
# if it's lower than the threshold, turn it all the way down (0).
# so lets write this in code. First, we need to iterate over all of the pixels in the
# image we want to work with
for x in range(output_image.width):
for y in range(output_image.height):
            # for the given pixel at x,y, lets check its value against the threshold
if output_image.getpixel((x,y))< threshold: #note that the first parameter is actually a tuple object
# lets set this to zero
output_image.putpixel( (x,y), 0 )
else:
# otherwise lets set this to 255
output_image.putpixel( (x,y), 255 )
#now we just return the new image
return output_image
# lets test this function over a range of different thresholds. Remember that you can use
# the range() function to generate a list of numbers at different step sizes. range() is called
# with a start, a stop, and a step size. So lets try range(0, 257, 64), which should generate 5
# images of different threshold values
for thresh in range(0,257,64):
print("Trying with threshold " + str(thresh))
# Lets display the binarized image inline
display(binarize(Image.open('readonly/Noisy_OCR.PNG'), thresh))
# And lets use tesseract on it. It's inefficient to binarize it twice but this is just for
# a demo
print(pytesseract.image_to_string(binarize(Image.open('readonly/Noisy_OCR.PNG'), thresh)))
# We can see from this that a threshold of 0 essentially turns everything white,
# that the text becomes more bold as we move towards a higher threshold, and that
# the shapes, which have a filled in grey color, become more evident at higher
# thresholds. In the next lecture we'll look a bit more at some of the challenges
# you can expect when doing OCR on real data
###Output
_____no_output_____
###Markdown
Tesseract and Photographs
###Code
# Lets try a new example and bring together some of the things we have learned.
# Here's an image of a storefront, lets load it and try and get the name of the
# store out of the image
from PIL import Image
import pytesseract
# Lets read in the storefront image I've loaded into the course and display it
image=Image.open('readonly/storefront.jpg')
display(image)
# Finally, lets try and run tesseract on that image and see what the results are
pytesseract.image_to_string(image)
# We see at the very bottom there is just an empty string. Tesseract is unable to take
# this image and pull out the name. But we learned how to crop the images in the
# last set of lectures, so lets try and help Tesseract by cropping out certain pieces.
#
# First, lets set the bounding box. In this image the store name is in a box
# bounded by (315, 170, 700, 270)
bounding_box=(315, 170, 700, 270)
# Now lets crop the image
title_image=image.crop(bounding_box)
# Now lets display it and pull out the text
display(title_image)
pytesseract.image_to_string(title_image)
# Great, we see how with a bit of a problem reduction we can make that work. So now we have
# been able to take an image, preprocess it where we expect to see text, and turn that text
# into a string that python can understand.
#
# If you look back up at the image though, you'll see there is a small sign inside of the
# shop that also has the shop name on it. I wonder if we're able to recognize the text on
# that sign? Let's give it a try.
#
# First, we need to determine a bounding box for that sign. I'm going to show you a short-cut
# to make this easier in an optional video in this module, but for now lets just use the bounding
# box I decided on
bounding_box=(900, 420, 940, 445)
# Now, lets crop the image
little_sign=image.crop(bounding_box)
display(little_sign)
# All right, that is a little sign! OCR works better with higher resolution images, so
# lets increase the size of this image by using the pillow resize() function
# Lets set the width and height equal to ten times the size it is now in a (w,h) tuple
new_size=(little_sign.width*10,little_sign.height*10)
# Now lets check the docs for resize()
help(little_sign.resize)
# We can see that there are a number of different filters for resizing the image. The
# default is Image.NEAREST. Lets see what that looks like
display(little_sign.resize( new_size, Image.NEAREST))
# I think we should be able to find something better. I can read it, but it looks
# really pixelated. Lets see what all the different resize options look like
options=[Image.NEAREST, Image.BOX, Image.BILINEAR, Image.HAMMING, Image.BICUBIC, Image.LANCZOS]
for option in options:
# lets print the option name
print(option)
# lets display what this option looks like on our little sign
display(little_sign.resize( new_size, option))
# From this we can notice two things. First, when we print out one of the resampling
# values it actually just prints an integer! This is really common: that the
# API developer writes a property, such as Image.BICUBIC, and then assigns it to an
# integer value to pass it around. Some languages use enumerations of values, which is
# common in say, Java, but in python this is a pretty normal way of doing things.
# The second thing we learned is that there are a number of different algorithms for
# image resampling. In this case, the Image.LANCZOS and Image.BICUBIC filters do a good
# job. Lets see if we are able to recognize the text off of this resized image
# First lets resize to the larger size
bigger_sign=little_sign.resize(new_size, Image.BICUBIC)
# Lets print out the text
pytesseract.image_to_string(bigger_sign)
# Well, no text there. Lets try and binarize this. First, let me just bring in the
# binarization code we did earlier
def binarize(image_to_transform, threshold):
output_image=image_to_transform.convert("L")
for x in range(output_image.width):
for y in range(output_image.height):
if output_image.getpixel((x,y))< threshold:
output_image.putpixel( (x,y), 0 )
else:
output_image.putpixel( (x,y), 255 )
return output_image
# Now, lets apply binarizations with, say, a threshold of 190, and try and display that
# as well as do the OCR work
binarized_bigger_sign=binarize(bigger_sign, 190)
display(binarized_bigger_sign)
pytesseract.image_to_string(binarized_bigger_sign)
# Ok, that text is pretty useless. How should we pick the best binarization
# to use? Well, there are some methods, but lets just try something very simple to
# show how well this can work. We have an english word we are trying to detect, "FOSSIL".
# If we tried all binarizations, from 0 through 255, and looked to see if there were
# any english words in that list, this might be one way. So lets see if we can
# write a routine to do this.
#
# First, lets load a list of english words into a list. I put a copy in the readonly
# directory for you to work with
eng_dict=[]
with open ("readonly/words_alpha.txt", "r") as f:
data=f.read()
# now we want to split this into a list based on the new line characters
eng_dict=data.split("\n")
# Now lets iterate through all possible thresholds and look for an english word, printing
# it out if it exists
for i in range(150,170):
    # lets binarize and convert this to string values
strng=pytesseract.image_to_string(binarize(bigger_sign,i))
# We want to remove non alphabetical characters, like ([%$]) from the text, here's
# a short method to do that
# first, lets convert our string to lower case only
strng=strng.lower()
# then lets import the string package - it has a nice list of lower case letters
import string
# now lets iterate over our string looking at it character by character, putting it in
    # the comparison text
comparison=''
for character in strng:
if character in string.ascii_lowercase:
comparison=comparison+character
# finally, lets search for comparison in the dictionary file
if comparison in eng_dict:
# and print it if we find it
print(comparison)
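# A possible refinement sketched here (not in the original lecture): rather
# than requiring an exact dictionary hit, we can fuzzy-match the OCR output
# against the word list with difflib from the standard library, which
# tolerates a character or two of misrecognition.
import difflib
# 'fossii' simulates an OCR misread of 'fossil'
print(difflib.get_close_matches('fossii', eng_dict, n=3, cutoff=0.8))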
# Well, not perfect, but we see fossil there among other values which are in the dictionary.
# This is not a bad way to clean up OCR data. It can be useful to use a language or domain specific
# dictionary in practice, especially if you are generating a search engine for specialized language
# such as a medical knowledge base or locations. And if you scroll up and look at the data
# we were working with - this small little wall hanging on the inside of the store - it's not
# so bad.
#
# At this point you've now learned how to manipulate images and convert them into text. In the
# next module in this course we're going to dig deeper further into a computer vision library
# which allows us to detect faces among other things. Then, on to the culminating project!
###Output
_____no_output_____
###Markdown
Jupyter Widgets (Optional)
###Code
# In this brief lecture I want to introduce you to one of the more advanced features of the
# Jupyter notebook development environment called widgets. Sometimes you want
# to interact with a function you have created and call it multiple times with different
# parameters. For instance, if we wanted to draw a red box around a portion of an
# image to try and fine tune the crop location. Widgets are one way to do this quickly
# in the browser without having to learn how to write a large desktop application.
#
# Lets check it out. First we want to import the Image and ImageDraw classes from the
# PILLOW package
from PIL import Image, ImageDraw
# Then we want to import the interact class from the widgets package
from ipywidgets import interact
# We will use interact to annotate a function. Lets bring in an image that we know we
# are interested in, like the storefront image from a previous lecture
image=Image.open('readonly/storefront.jpg')
# Ok, our setup is done. Now we're going to use the interact decorator to indicate
# that we want to wrap the python function. We do this using the @ sign. This will
# take a set of parameters which are identical to the function to be called. Then Jupyter
# will draw some sliders on the screen to let us manipulate these values. Decorators,
# which is what the @ sign is describing, are standard python statements and just a
# short hand for functions which wrap other functions. They are a bit advanced though, so
# we haven't talked about them in this course, and you might just have to have some faith
@interact(left=100, top=100, right=200, bottom=200)
# Now we just write the function we had before
def draw_border(left, top, right, bottom):
img=image.copy()
drawing_object=ImageDraw.Draw(img)
drawing_object.rectangle((left,top,right,bottom), fill = None, outline ='red')
display(img)
# Jupyter widgets is certainly advanced territory, but if you would like
# to explore more you can read about what is available here:
# https://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html
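# To demystify the @interact decorator a little (an aside added here, not part
# of the original lecture): a decorator is just a function that takes a
# function and returns a wrapped version of it. A minimal hand-rolled example:
def announce(func):
    def wrapper(*args, **kwargs):
        print('calling', func.__name__)
        return func(*args, **kwargs)
    return wrapper

@announce          # equivalent to: add = announce(add)
def add(a, b):
    return a + b

add(1, 2)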
###Output
_____no_output_____ |
Example/Keras_Mnist_Introduce.ipynb | ###Markdown
数据准备
###Code
import numpy as np
import pandas as pd
from keras.utils import np_utils
np.random.seed(10)
from keras.datasets import mnist
(x_train_image, y_train_label), \
(x_test_image, y_test_label) = mnist.load_data()
print('train data=',len(x_train_image))
print(' test data=',len(x_test_image))
print ('x_train_image:',x_train_image.shape)
print ('y_train_label:',y_train_label.shape)
import matplotlib.pyplot as plt
def plot_image(image):
fig = plt.gcf()
fig.set_size_inches(2, 2)
plt.imshow(image, cmap='binary')
plt.show()
plot_image(x_train_image[0])
y_train_label[0]
import matplotlib.pyplot as plt
def plot_images_labels_prediction(images,labels,
prediction,idx,num=10):
fig = plt.gcf()
fig.set_size_inches(12, 14)
if num>25: num=25
for i in range(0, num):
ax=plt.subplot(5,5, 1+i)
ax.imshow(images[idx], cmap='binary')
title= "label=" +str(labels[idx])
if len(prediction)>0:
title+=",predict="+str(prediction[idx])
ax.set_title(title,fontsize=10)
ax.set_xticks([]);ax.set_yticks([])
idx+=1
plt.show()
plot_images_labels_prediction(x_train_image,y_train_label,[],0,10)
print ('x_test_image:',x_test_image.shape)
print ('y_test_label:',y_test_label.shape)
plot_images_labels_prediction(x_test_image,y_test_label,[],0,10)
###Output
_____no_output_____
###Markdown
Preprocess the images
###Code
print ('x_train_image:',x_train_image.shape)
print ('y_train_label:',y_train_label.shape)
x_Train =x_train_image.reshape(60000, 784).astype('float32')
x_Test = x_test_image.reshape(10000, 784).astype('float32')
print ('x_train:',x_Train.shape)
print ('x_test:',x_Test.shape)
x_train_image[0]
x_Train_normalize = x_Train/ 255
x_Test_normalize = x_Test/ 255
x_Train_normalize[0]
###Output
_____no_output_____
###Markdown
one hot encode outputs
###Code
y_train_label[:5]
y_TrainOneHot = np_utils.to_categorical(y_train_label)
y_TestOneHot = np_utils.to_categorical(y_test_label)
y_TrainOneHot[:5]
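# A minimal sketch of where this preparation leads (not part of the original
# notebook): a small Keras multi-layer perceptron that consumes the 784-float
# normalized inputs and the one-hot labels built above.
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(units=256, input_dim=784, activation='relu'))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.summary()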
###Output
_____no_output_____ |
JavaScripts/CodeEditor/MapCenterObject.ipynb | ###Markdown
Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.

The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.

**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Center the map on an image.
image = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20130603')
Map.addLayer(image, {'bands': ['B4', 'B3', 'B2'], 'max': 20000})
Map.centerObject(image)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google MapS`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Center the map on an image.
image = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20130603')
Map.addLayer(image, {'bands': ['B4', 'B3', 'B2'], 'max': 20000})
Map.centerObject(image)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Center the map on an image.
image = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20130603')
Map.addLayer(image, {'bands': ['B4', 'B3', 'B2'], 'max': 20000})
Map.centerObject(image)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
11_Tutorial.ipynb | ###Markdown
Day 11: Seating System
Your plane lands with plenty of time to spare. The final leg of your journey is a ferry that goes directly to the tropical island where you can finally start your vacation. As you reach the waiting area to board the ferry, you realize you're so early, nobody else has even arrived yet!
By modeling the process people use to choose (or abandon) their seat in the waiting area, you're pretty sure you can predict the best place to sit. You make a quick map of the seat layout (your puzzle input).
The seat layout fits neatly on a grid. Each position is either floor (.), an empty seat (L), or an occupied seat (#). For example, the initial seat layout might look like this:
```
L.LL.LL.LL
LLLLLLL.LL
L.L.L..L..
LLLL.LL.LL
L.LL.LL.LL
L.LLLLL.LL
..L.L.....
LLLLLLLLLL
L.LLLLLL.L
L.LLLLL.LL
```
Now, you just need to model the people who will be arriving shortly. Fortunately, people are entirely predictable and always follow a simple set of rules. All decisions are based on the number of occupied seats adjacent to a given seat (one of the eight positions immediately up, down, left, right, or diagonal from the seat). The following rules are applied to every seat simultaneously:
If a seat is empty (L) and there are no occupied seats adjacent to it, the seat becomes occupied.
If a seat is occupied (#) and four or more seats adjacent to it are also occupied, the seat becomes empty.
Otherwise, the seat's state does not change.
Floor (.) never changes; seats don't move, and nobody sits on the floor.
After one round of these rules, every seat in the example layout becomes occupied:
```
#.##.##.##
#######.##
#.#.#..#..
####.##.##
#.##.##.##
#.#####.##
..#.#.....
##########
#.######.#
#.#####.##
```
After a second round, the seats with four or more occupied adjacent seats become empty again:
```
#.LL.L#.##
#LLLLLL.L#
L.L.L..L..
#LLL.LL.L#
#.LL.LL.LL
#.LLLL#.##
..L.L.....
#LLLLLLLL#
#.LLLLLL.L
#.#LLLL.##
```
This process continues for three more rounds:
```
#.##.L#.##
#L###LL.L#
L.#.#..#..
#L##.##.L#
#.##.LL.LL
#.###L#.##
..#.#.....
#L######L#
#.LL###L.L
#.#L###.##
```
```
#.#L.L#.##
#LLL#LL.L#
L.L.L..#..
#LLL.##.L#
#.LL.LL.LL
#.LL#L#.##
..L.L.....
#L#LLLL#L#
#.LLLLLL.L
#.#L#L#.##
```
```
#.#L.L#.##
#LLL#LL.L#
L.#.L..#..
#L##.##.L#
#.#L.LL.LL
#.#L#L#.##
..L.L.....
#L#L##L#L#
#.LLLLLL.L
#.#L#L#.##
```
At this point, something interesting happens: the chaos stabilizes and further applications of these rules cause no seats to change state! Once people stop moving around, you count 37 occupied seats.
Simulate your seating area by applying the seating rules repeatedly until no seats change state. How many seats end up occupied?
###Code
import numpy as np
with open("inputs/11.txt") as input_file:
input = input_file.read()
seatplan = [[seat for seat in row] for row in input.splitlines()]
seatplan = np.asarray(seatplan, dtype="S1")
%%time
FLOOR = b"."
OCCUPIED = b"#"
FREE = b"L"
def num_occupied(seatplan, x, y):
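    # Coordinates below are offset by +1 relative to the padded plan, so the
    # 8 neighbours of original cell (x, y) live at (x..x+2, y..y+2) in the
    # padded array, skipping the centre (x+1, y+1).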
check_seats = np.array([
[x, y], [x+1,y], [x+2, y ],
[x, y+1], [x+2, y+1],
[x, y+2], [x+1,y+2], [x+2, y+2]
])
x_coords = check_seats[:, 0]
y_coords = check_seats[:, 1]
padded_plan = np.pad(seatplan, pad_width=1, mode="constant", constant_values=FLOOR)
frame = padded_plan[x_coords, y_coords]
return np.sum(frame == OCCUPIED)
def iterate_seatplan(old_seatplan):
new_plan = old_seatplan.copy()
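    # The rules apply to every seat simultaneously: neighbour counts are read
    # from the old plan while updates are written to the copy.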
for x, y in np.ndindex(old_seatplan.shape):
occupied = num_occupied(old_seatplan, x, y)
if old_seatplan[x,y] == OCCUPIED and occupied >= 4:
new_plan[x, y] = FREE
elif old_seatplan[x, y] == FREE and occupied == 0:
new_plan[x,y]=OCCUPIED
return new_plan
def calc_seatplan(seatplan):
old_plan = seatplan.copy()
while True:
new_plan = iterate_seatplan(old_plan)
if np.array_equal(old_plan, new_plan):
break
old_plan = new_plan
num_occupied_seats = np.sum(new_plan == OCCUPIED)
print(num_occupied_seats)
calc_seatplan(seatplan)
from scipy import ndimage
%%time
def iterate_seatplan_smart(old_plan):
new_plan = old_plan.copy()
int_plan = (old_plan == OCCUPIED).astype(np.int32)
kernel = [
[1,1,1],
[1,0,1],
[1,1,1],
]
occupancy_plan = ndimage.convolve(int_plan, kernel, mode="constant", cval=0)
to_be_occupied = np.where((occupancy_plan == 0) & (old_plan == FREE))
to_be_freed = np.where((occupancy_plan >= 4) & (old_plan == OCCUPIED))
new_plan[to_be_occupied] = OCCUPIED
new_plan[to_be_freed] = FREE
return new_plan
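# Minimal sketch (hypothetical 3x3 grid, not part of the puzzle input) showing
# how the convolution counts occupied neighbours: each output cell is the sum
# of the 8 surrounding cells of the 0/1 occupancy grid, with zero padding at
# the borders. Expected result: [[0, 2, 0], [2, 4, 2], [0, 2, 0]].
_demo = np.array([[1, 0, 1],
                  [0, 0, 0],
                  [1, 0, 1]])
_demo_kernel = [[1, 1, 1],
                [1, 0, 1],
                [1, 1, 1]]
print(ndimage.convolve(_demo, _demo_kernel, mode="constant", cval=0))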
def calc_seatplan_smart(seatplan):
old_plan = seatplan.copy()
while True:
new_plan = iterate_seatplan_smart(old_plan)
if np.array_equal(old_plan, new_plan):
break
old_plan = new_plan
num_occupied_seats = np.sum(new_plan == OCCUPIED)
print(num_occupied_seats)
calc_seatplan_smart(seatplan)
# Scratch cells exploring NumPy fancy indexing; a small array is defined here
# so the examples run on their own.
x = np.arange(12).reshape(3, 4)
print(x[[1,2,2],:])
x[[1,2,2,2,2,2,2,2,2],[1,2,2,2,2,2,2,2,2]]
###Output
_____no_output_____
###Markdown
Your puzzle answer was 2265.
###Code
%matplotlib inline
import cv2
from matplotlib import pyplot as plt
x=["H", "a", "l"]
for index, letter in enumerate(x):
print(letter, index)
img=cv2.imread("python-logo.png")
img=cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
%%time
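# Darken the image by halving every channel value one element at a time; the
# %%time magic shows how slow nested pure-Python loops are compared to a
# vectorised NumPy expression such as img // 2.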
for y, row_data in enumerate(img):
for x, pixel in enumerate(row_data):
for z, value in enumerate(pixel):
img[y,x,z] = value/2
x = np.arange(12).reshape(3,4)
y = np.arange(4)
plt.imshow(img * 50)
x=[[1,2,3],[1,2,4]]
x + 1
###Output
_____no_output_____ |
mv_gaussian/HP tuning.ipynb | ###Markdown
Five obs
###Code
snl_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_five_obs/hp_tuning/snl_' + id_job + '.txt'
snl_res[:,i] = read_res_file(p)
snpec_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_five_obs/hp_tuning/snpec_' + id_job + '.txt'
snpec_res[:,i] = read_res_file(p)
snpla_res = np.zeros((17, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_five_obs/hp_tuning/snpla_' + id_job + '.txt'
snpla_res[:,i] = read_res_file(p)
snreb_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_five_obs/hp_tuning/snre_b_' + id_job + '.txt'
snreb_res[:,i] = read_res_file(p)
print("Nbr NaN SNL: " + str(np.isnan(snl_res[-1,:]).sum()))
print("Nbr NaN SNPE-C: " + str(np.isnan(snpec_res[-1,:]).sum()))
print("Nbr NaN SNPLA: " + str(np.isnan(snpla_res[-1,:]).sum()))
print("Nbr NaN SNRE-B: " + str(np.isnan(snreb_res[-1,:]).sum()))
snl_res = snl_res[:, ~np.isnan(snl_res).any(axis=0)]
snpec_res = snpec_res[:, ~np.isnan(snpec_res).any(axis=0)]
snpla_res = snpla_res[:, ~np.isnan(snpla_res).any(axis=0)]
snreb_res = snreb_res[:, ~np.isnan(snreb_res).any(axis=0)]
plt.figure()
plt.plot(snl_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snpec_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snpla_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snreb_res[-1-nbr_round:-1, :]);
plt.yscale('log')
print("Optimla hp for SNL: " + str(snl_res[1,np.argmin(snl_res[-1, :])]))
print("Optimla hp for SNPE-C: " + str(snpec_res[1,np.argmin(snpec_res[-1, :])]))
print("Optimla hp for SNPLA: " + str(snpla_res[1:5,np.argmin(snpla_res[-1, :])]))
print("Optimla hp for SNRE-B: " + str(snreb_res[1,np.argmin(snreb_res[-1, :])]))
print("SNL:")
print("Min: " + str(np.min(snl_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snl_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snl_res[-1, :])))
print("---")
print("SNPE-C:")
print("Min: " + str(np.min(snpec_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snpec_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snpec_res[-1, :])))
print("---")
print("SNPLA:")
print("Min: " + str(np.min(snpla_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snpla_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snpla_res[-1, :])))
print("---")
print("SNRE-B:")
print("Min: " + str(np.min(snreb_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snreb_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snreb_res[-1, :])))
print((0.75*np.diff(np.quantile(snl_res[-1, :], [0.25, 0.75]))/np.quantile(snl_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snpec_res[-1, :], [0.25, 0.75]))/np.quantile(snpec_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snpla_res[-1, :], [0.25, 0.75]))/np.quantile(snpla_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snreb_res[-1, :], [0.25, 0.75]))/np.quantile(snreb_res[-1, :], [0.5])).round(3))
print(np.quantile(snl_res[-1, :], [0.5]).round(3))
print(np.quantile(snpec_res[-1, :], [0.5]).round(3))
print(np.quantile(snpla_res[-1, :], [0.5]).round(3))
print(np.quantile(snreb_res[-1, :], [0.5]).round(3))
print("---")
print(np.diff(np.quantile(snl_res[-1, :], [0.25, 0.75])).round(3))
print(np.diff(np.quantile(snpec_res[-1, :], [0.25, 0.75])).round(3))
print(np.diff(np.quantile(snpla_res[-1, :], [0.25, 0.75])).round(3))
print(np.diff(np.quantile(snreb_res[-1, :], [0.25, 0.75])).round(3))
np.diff([2,5])
###Output
_____no_output_____
###Markdown
summary stats
###Code
snl_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_summary_stats/hp_tuning/snl_' + id_job + '.txt'
snl_res[:,i] = read_res_file(p)
snpec_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_summary_stats/hp_tuning/snpec_' + id_job + '.txt'
snpec_res[:,i] = read_res_file(p)
snpla_res = np.zeros((17, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_summary_stats/hp_tuning/snpla_' + id_job + '.txt'
snpla_res[:,i] = read_res_file(p)
snreb_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_summary_stats/hp_tuning/snre_b_' + id_job + '.txt'
snreb_res[:,i] = read_res_file(p)
print("Nbr NaN SNL: " + str(np.isnan(snl_res[-1,:]).sum()))
print("Nbr NaN SNPE-C: " + str(np.isnan(snpec_res[-1,:]).sum()))
print("Nbr NaN SNPLA: " + str(np.isnan(snpla_res[-1,:]).sum()))
print("Nbr NaN SNRE-B: " + str(np.isnan(snreb_res[-1,:]).sum()))
snl_res = snl_res[:, ~np.isnan(snl_res).any(axis=0)]
snpec_res = snpec_res[:, ~np.isnan(snpec_res).any(axis=0)]
snpla_res = snpla_res[:, ~np.isnan(snpla_res).any(axis=0)]
snreb_res = snreb_res[:, ~np.isnan(snreb_res).any(axis=0)]
plt.figure()
plt.plot(snl_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snpec_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snpla_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snreb_res[-1-nbr_round:-1, :]);
plt.yscale('log')
print("Optimla hp for SNL: " + str(snl_res[1,np.argmin(snl_res[-1, :])]))
print("Optimla hp for SNPE-C: " + str(snpec_res[1,np.argmin(snpec_res[-1, :])]))
print("Optimla hp for SNPLA: " + str(snpla_res[1:5,np.argmin(snpla_res[-1, :])]))
print("Optimla hp for SNRE-B: " + str(snreb_res[1,np.argmin(snreb_res[-1, :])]))
print("SNL:")
print("Min: " + str(np.min(snl_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snl_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snl_res[-1, :])))
print("---")
print("SNPE-C:")
print("Min: " + str(np.min(snpec_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snpec_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snpec_res[-1, :])))
print("---")
print("SNPLA:")
print("Min: " + str(np.min(snpla_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snpla_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snpla_res[-1, :])))
print("---")
print("SNRE-B:")
print("Min: " + str(np.min(snreb_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snreb_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snreb_res[-1, :])))
print((0.75*np.diff(np.quantile(snl_res[-1, :], [0.25, 0.75]))/np.quantile(snl_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snpec_res[-1, :], [0.25, 0.75]))/np.quantile(snpec_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snpla_res[-1, :], [0.25, 0.75]))/np.quantile(snpla_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snreb_res[-1, :], [0.25, 0.75]))/np.quantile(snreb_res[-1, :], [0.5])).round(3))
print(np.quantile(snl_res[-1, :], [0.5]).round(3))
print(np.quantile(snpec_res[-1, :], [0.5]).round(3))
print(np.quantile(snpla_res[-1, :], [0.5]).round(3))
print(np.quantile(snreb_res[-1, :], [0.5]).round(3))
print("---")
print(np.diff(np.quantile(snl_res[-1, :], [0.25, 0.75])).round(3))
print(np.diff(np.quantile(snpec_res[-1, :], [0.25, 0.75])).round(3))
print(np.diff(np.quantile(snpla_res[-1, :], [0.25, 0.75])).round(3))
print(np.diff(np.quantile(snreb_res[-1, :], [0.25, 0.75])).round(3))
###Output
[0.612]
[0.114]
[0.027]
[3.68]
---
[625.626]
[0.072]
[0.031]
[4.544]
###Markdown
learnable summary stats
###Code
snl_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_learnable_summary_stats/hp_tuning/snl_' + id_job + '.txt'
snl_res[:,i] = read_res_file(p)
snpec_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_learnable_summary_stats/hp_tuning/snpec_' + id_job + '.txt'
snpec_res[:,i] = read_res_file(p)
snpla_res = np.zeros((17, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_learnable_summary_stats/hp_tuning/snpla_' + id_job + '.txt'
snpla_res[:,i] = read_res_file(p)
snreb_res = np.zeros((14, nbr_data_hp_tuning))
for i in range(nbr_data_hp_tuning):
id_job = str(2) + '_' + str(11) + '_' + str(10) + "_" + str(i+1)
p = 'low_dim_w_learnable_summary_stats/hp_tuning/snre_b_' + id_job + '.txt'
snreb_res[:,i] = read_res_file(p)
print("Nbr NaN SNL: " + str(np.isnan(snl_res[-1,:]).sum()))
print("Nbr NaN SNPE-C: " + str(np.isnan(snpec_res[-1,:]).sum()))
print("Nbr NaN SNPLA: " + str(np.isnan(snpla_res[-1,:]).sum()))
print("Nbr NaN SNRE-B: " + str(np.isnan(snreb_res[-1,:]).sum()))
snl_res = snl_res[:, ~np.isnan(snl_res).any(axis=0)]
snpec_res = snpec_res[:, ~np.isnan(snpec_res).any(axis=0)]
snpla_res = snpla_res[:, ~np.isnan(snpla_res).any(axis=0)]
snreb_res = snreb_res[:, ~np.isnan(snreb_res).any(axis=0)]
plt.figure()
plt.plot(snl_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snpec_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snpla_res[-1-nbr_round:-1, :]);
plt.yscale('log')
plt.figure()
plt.plot(snreb_res[-1-nbr_round:-1, :]);
plt.yscale('log')
print("Optimla hp for SNL: " + str(snl_res[1,np.argmin(snl_res[-1, :])]))
print("Optimla hp for SNPE-C: " + str(snpec_res[1,np.argmin(snpec_res[-1, :])]))
print("Optimla hp for SNPLA: " + str(snpla_res[1:5,np.argmin(snpla_res[-1, :])]))
print("Optimla hp for SNRE-B: " + str(snreb_res[1,np.argmin(snreb_res[-1, :])]))
print("SNL:")
print("Min: " + str(np.min(snl_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snl_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snl_res[-1, :])))
print("---")
print("SNPE-C:")
print("Min: " + str(np.min(snpec_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snpec_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snpec_res[-1, :])))
print("---")
print("SNPLA:")
print("Min: " + str(np.min(snpla_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snpla_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snpla_res[-1, :])))
print("---")
print("SNRE-B:")
print("Min: " + str(np.min(snreb_res[-1, :])))
print("Q1, Q2, Q3: " + str(np.quantile(snreb_res[-1, :], [0.25, 0.5, 0.75])))
print("Max: " + str(np.max(snreb_res[-1, :])))
print((0.75*np.diff(np.quantile(snl_res[-1, :], [0.25, 0.75]))/np.quantile(snl_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snpec_res[-1, :], [0.25, 0.75]))/np.quantile(snpec_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snpla_res[-1, :], [0.25, 0.75]))/np.quantile(snpla_res[-1, :], [0.5])).round(3))
print((0.75*np.diff(np.quantile(snreb_res[-1, :], [0.25, 0.75]))/np.quantile(snreb_res[-1, :], [0.5])).round(3))
###Output
[0.649]
[0.345]
[0.547]
[0.421]
|
e-mail/.ipynb_checkpoints/desafio-checkpoint.ipynb | ###Markdown
Python and E-mail Challenge
Description
Suppose you work at a manufacturing company and are responsible for the business intelligence area. Every day you, your team, or even a program generates a different report for each area of the company:
- Financeiro (Finance)
- Logística (Logistics)
- Manutenção (Maintenance)
- Marketing
- Operações (Operations)
- Produção (Production)
- Vendas (Sales)
Each of these reports must be sent by e-mail to the manager of its area. Write a program that does this automatically. The list of managers (with their e-mail addresses) and areas is in the file 'Enviar E-mails.xlsx'.
Hint: use pandas read_excel to read the e-mail spreadsheet; it will make this easier.
###Code
import pandas as pd
import win32com.client as win32
outlook = win32.Dispatch('outlook.application')
gerentes_df = pd.read_excel('Enviar E-mails.xlsx')
#gerentes_df.info()
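# The spreadsheet is expected to provide (at least) the columns 'Gerente',
# 'E-mail' and 'Relatório' (manager name, e-mail address and report/area name),
# which the loop below relies on.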
for i, email in enumerate(gerentes_df['E-mail']):
    gerente = gerentes_df.loc[i, 'Gerente']   # manager's name
    area = gerentes_df.loc[i, 'Relatório']    # report / area name
    mail = outlook.CreateItem(0)              # 0 = olMailItem (a new e-mail)
    mail.To = email  # each report goes to that area's manager
    mail.Subject = 'Relatório de {}'.format(area)
mail.Body = '''
Prezado {},
Segue em anexo o Relatório de {}, conforme solicitado.
Qualquer dúvida estou à disposição.
Att.,
'''.format(gerente, area)
attachment = r'C:\Users\Maki\Downloads\e-mail\{}.xlsx'.format(area)
mail.Attachments.Add(attachment)
mail.Send()
###Output
_____no_output_____ |
lib-src/index.ipynb | ###Markdown
Project name here> Summary description here. This file will become your README and also the index of your documentation. Install `pip install your_project_name` How to use Fill me in please! Don't forget code examples:
###Code
1+1
###Output
_____no_output_____ |
task-01/main.ipynb | ###Markdown
Function
###Code
from math import sin, cos, pi
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
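# The map iterated below is a linear rotation by the tunes (mux, muy) in each
# plane followed by a cubic kick of strength b4; the (x^3 - 3xy^2, y^3 - 3x^2y)
# terms resemble an octupole-magnet kick. A particle counts as lost once it
# leaves the unit circle.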
def one_particle(x0, y0, mux, muy, b4):
""" calculate one particle stability """
px0 = 0.0; py0 = 0.0;
for n in range(10001):
#print('{:3} {:+19.16f} {:+19.16f} {:+19.16f}'.format(n, x0, y0, x0*x0 + y0*y0))
if (x0*x0 + y0*y0) < 1.0:
x1 = x0 * cos(2.0*pi*mux) + px0 * sin(2.0*pi*mux);
px1 = -x0 * sin(2.0*pi*mux) + px0 * cos(2.0*pi*mux);
y1 = y0 * cos(2.0*pi*muy) + py0 * sin(2.0*pi*muy);
py1 = -y0 * sin(2.0*pi*muy) + py0 * cos(2.0*pi*muy);
px2 = px1 + b4 * (x1*x1*x1 - 3.0*x1*y1*y1);
py2 = py1 - b4 * (y1*y1*y1 - 3.0*x1*x1*y1);
x0 = x1; y0 = y1;
px0 = px2; py0 = py2;
else:
return (n-1)
return n
def calculate(X, Y, upper, delta, mux, muy, b4):
    """Calculate survival times for every particle on the first-quadrant grid."""
    Z = np.zeros(( upper+1, upper+1 ))
    for y in Y.T[:1][0]:    # first column of the meshgrid = the y grid indices
        for x in X[:1][0]:  # first row of the meshgrid = the x grid indices
x0 = x*delta
y0 = y*delta
in_circle = 1 if ((x0*x0 + y0*y0) < 1.0) else 0
if (in_circle == 1):
n = one_particle(x0, y0, mux, muy, b4)
Z[y][x] = int(n)
#print(x, y, Z[x][y])
return Z
###Output
_____no_output_____
###Markdown
Main
###Code
# parameters
mux = 0.32;
muy = 0.32;
upper = 50
delta = 1.0 / upper
b4 = [500, 50, 5, 0.5, 0.05, 0.005, 0.0005, 0.00005, 0.000005]
# initialization X, Y mesh
X = np.arange(0, upper+1)
Y = np.arange(0, upper+1)
X, Y = np.meshgrid(X, Y)
# calculate and plot for a different b4
fig, ax = plt.subplots(3, 3, figsize=(14,12))
for i in range(3):
    for j in range(3):
        # index 3*i + j walks through all nine b4 values across the 3x3 grid
        Z = calculate(X, Y, upper, delta, mux, muy, b4[3*i + j]) + 1
        sub = ax[i][j].pcolor(X*delta, Y*delta, Z, cmap='Greys', norm=colors.LogNorm(vmin=1.0, vmax=Z.max()))
        ax[i][j].set_title("b4 = {}".format(b4[3*i + j]))
fig.colorbar(sub, ax=ax)
plt.show()
###Output
_____no_output_____ |
src/.ipynb_checkpoints/Chapter1 (1)-checkpoint.ipynb | ###Markdown
Notebook Examples for Chapter 1
###Code
%matplotlib inline
import IPython.display as disp
import ee
ee.Initialize()
minlon = 6.31
minlat = 50.83
maxlon = 6.58
maxlat = 50.95
rect = ee.Geometry.Rectangle([minlon,minlat,maxlon,maxlat]);
collection = ee.ImageCollection('COPERNICUS/S1_GRD') \
.filterBounds(ee.Geometry.Point(minlon,minlat)) \
.filterBounds(ee.Geometry.Point(maxlon,maxlat)) \
.filterDate(ee.Date('2017-05-01'), ee.Date('2017-06-01')) \
.filter(ee.Filter.eq('transmitterReceiverPolarisation', ['VV','VH'])) \
.filter(ee.Filter.eq('resolution_meters', 10)) \
.filter(ee.Filter.eq('instrumentMode', 'IW'))
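# The chained filters above keep only 10 m, Interferometric Wide-swath (IW)
# Sentinel-1 scenes from May 2017, acquired with dual VV+VH polarisation,
# whose footprint contains both corners of the area of interest.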
image = ee.Image(collection.first()).clip(rect)
url = image.select('VV').getThumbURL({'min':-20,'max':0})
disp.Image(url=url)
cd /home/scripts
run ex1_1
###Output
_____no_output_____ |
fy18-annual-report/report-final.ipynb | ###Markdown
Enterprise Dataset Inventory AnalysisThis notebook presents the code used to generate the Enterprise Dataset Inventory (EDI) analysis in the [Chief Data Officer's Annual Report](http://opendata.dc.gov/pages/cdo-annual-report) published on March 11, 2018. The EDI and Annual Report fulfill the requirements of the [District of Columbia Data Policy](https://octo.dc.gov/page/district-columbia-data-policy) enacted by Mayor Muriel Bowser on April 27, 2017. The report requires two archived datasets available in [this GitHub repository](https://github.com/DCgov/enterprise-dataset-inventory/tree/master/fy18-annual-report/data): [agency_participants.csv](https://github.com/DCgov/enterprise-dataset-inventory/blob/master/fy18-annual-report/data/agency_participants_archived.csv) contains a list of agency names, acronyms, whether the agency is mayoral or non-mayoral, and whether or not the agency participated in the 2017-18 Enterprise Dataset Inventory. [dataset_inventory_2018_03_11.csv](https://github.com/DCgov/enterprise-dataset-inventory/blob/master/fy18-annual-report/data/dataset_inventory_2018_03_11.csv) contains an archived copy of the data table derived from the Enterprise Dataset Inventory on the day of the report's publication. A continually-updated copy of this dataset is available on the [DC Open Data Portal](http://opendata.dc.gov/datasets/enterprise-dataset-inventory) and can be analyzed [using this notebook](https://github.com/DCgov/enterprise-dataset-inventory/blob/master/report-updated.ipynb).The report was generated in Python 3.6.3 and requires the following packages: Pandas, NumPy, Matplotlib, and Seaborn.Please report any issues to the GitHub repository [here](https://github.com/DCgov/enterprise-dataset-inventory/issues).
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(font='DejaVu Sans')
np.set_printoptions(suppress=True)
###Output
_____no_output_____
###Markdown
Import Data
###Code
participation = pd.read_csv('data/agency_participants_archived.csv')
df = pd.read_csv('data/dataset_inventory_2018_03_11.csv')
###Output
_____no_output_____
###Markdown
When was the most recent record published?
###Code
print('Data Last Pulled: '+pd.to_datetime(df.PUBLISHED_DATE).max().strftime("%Y/%m/%d"))
###Output
Data Last Pulled: 2018/03/11
###Markdown
Assigning OCTO Data to Data OwnersThe Office of the Chief Technology Officer (OCTO) maintains and publishes a large number of datasets for DC government agencies. For the sake of the inventory, OCTO largely attributed ownership of these datasets to the originating agency. This function cleans up data ownership and ensures that datasets are properly attributed in the analysis. Datasets maintained or published by OCTO that originate with non-DC government agencies (e.g., federal agencies) are assigned to OCTO as the data owner for the sake of this analysis.
###Code
not_dc = ['AOC', 'BID', 'CENSUS', 'CT', 'MWCOG', 'NCPC', 'NGA', 'NPS', 'USDA', 'USDOT', 'USFWS', 'USGS', 'USPS', 'WDCEP', 'WMATA']
def data_owner(x):
if x.AGENCY_ACRONYM in not_dc:
return 'OCTO'
elif (x.AGENCY_ACRONYM == 'OCTO') and ('DCEO' in x.DATA_OWNER):
return 'DOEE'
elif (x.AGENCY_ACRONYM == 'OCTO') and ('OA' in x.DATA_OWNER):
return 'DCOA'
elif (x.AGENCY_ACRONYM == 'OA'):
return 'DCOA'
elif (x.AGENCY_ACRONYM == 'OCTO') and ('NavTEQ' in x.DATA_OWNER):
return 'OCTO'
elif (x.AGENCY_ACRONYM == 'OCTO') and ('OCTO' not in x.DATA_OWNER):
return x.DATA_OWNER
elif (x.AGENCY_ACRONYM == 'OCTO') and ('PASS' in x.DATA_OWNER):
return 'OCP'
else:
return x.AGENCY_ACRONYM
df['AGENCYCODE'] = df.apply(lambda x: data_owner(x), axis = 1)
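# Illustrative sanity check (not part of the original analysis): every dataset
# whose recorded owner is a non-DC-government agency should now be attributed
# to OCTO by the mapping above.
assert set(df.loc[df.AGENCY_ACRONYM.isin(not_dc), 'AGENCYCODE']) <= {'OCTO'}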
###Output
_____no_output_____
###Markdown
How many agencies participated?
###Code
mayoral = participation[participation.TYPE_OF_AGENCY=='Mayoral Agency']
nonmayoral = participation[participation.TYPE_OF_AGENCY=='Non-Mayoral Agency']
print('Number of datasets recorded: '+str(len(df)))
print('Total number of agencies: ' +str(len(participation)))
print('\nNumber of Agencies participating: '+str(np.round((participation.PARTICIPATING=='Yes').sum())))
print('Percent of Agencies Participating: '+str(100 * np.round((participation.PARTICIPATING=='Yes').sum()*1.0/participation.shape[0], 3)))
print('\nNumber of Mayoral Agencies: ' +str(np.round(mayoral.shape[0])))
print('Number of Mayoral Agencies Participating: ' +str(np.round((mayoral.PARTICIPATING=='Yes').sum())))
print('Percent of Mayoral Agencies Participating: ' +str(100 * np.round((mayoral.PARTICIPATING=='Yes').sum()*1.0/mayoral.shape[0], 3)))
print('\nNumber of Non-Mayoral Agencies: ' +str(np.round(nonmayoral.shape[0])))
print('Number of Non-Mayoral Agencies Participating: ' +str(np.round((nonmayoral.PARTICIPATING=='Yes').sum())))
print('Percent of Non-Mayoral Agencies Participating: ' +str(100 * np.round((nonmayoral.PARTICIPATING=='Yes').sum()*1.0/nonmayoral.shape[0], 3)))
fig, (ax, ax1, ax2) = plt.subplots(1,3,figsize = (20,12))
sns.barplot(np.sort(participation.PARTICIPATING.unique()), participation.PARTICIPATING.value_counts().sort_index(), ax=ax)
ax.set_xlabel('Participated', fontsize = 25)
ax.set_ylabel('Number of Agencies', fontsize = 25)
ax.set_title('All Agencies', fontsize = 25)
ax.tick_params(which = 'major', labelsize = 25)
ax.set_ylim(0,80)
if ('No' in mayoral.PARTICIPATING.unique()) and ('Yes' in mayoral.PARTICIPATING.unique()):
x = np.sort(mayoral.PARTICIPATING.unique())
y = mayoral.PARTICIPATING.value_counts().sort_index()
elif ('No' not in mayoral.PARTICIPATING.unique()) and ('Yes' in mayoral.PARTICIPATING.unique()):
    x = ['No', 'Yes']
    y = [0, (mayoral.PARTICIPATING == 'Yes').sum()]  # scalar count, not a Series
elif ('No' in mayoral.PARTICIPATING.unique()) and ('Yes' not in mayoral.PARTICIPATING.unique()):
    x = ['No', 'Yes']
    y = [(mayoral.PARTICIPATING == 'No').sum(), 0]  # scalar count, not a Series
sns.barplot(x, y, ax=ax1)
ax1.set_xlabel('Participated', fontsize = 25)
ax1.set_ylabel('', fontsize = 25)
ax1.set_title('Mayoral Agencies', fontsize = 25)
ax1.tick_params(which = 'major', labelsize = 25)
ax1.set_ylim(0,80)
if ('No' in nonmayoral.PARTICIPATING.unique()) and ('Yes' in nonmayoral.PARTICIPATING.unique()):
x = np.sort(nonmayoral.PARTICIPATING.unique())
y = nonmayoral.PARTICIPATING.value_counts().sort_index()
elif ('No' not in nonmayoral.PARTICIPATING.unique()) and ('Yes' in nonmayoral.PARTICIPATING.unique()):
    x = ['No', 'Yes']
    y = [0, (nonmayoral.PARTICIPATING == 'Yes').sum()]  # scalar count, not a Series
elif ('No' in nonmayoral.PARTICIPATING.unique()) and ('Yes' not in nonmayoral.PARTICIPATING.unique()):
    x = ['No', 'Yes']
    y = [(nonmayoral.PARTICIPATING == 'No').sum(), 0]  # scalar count, not a Series
sns.barplot(x, y, ax=ax2)
ax2.set_xlabel('Participated', fontsize = 25)
ax2.set_ylabel('', fontsize = 25)
ax2.set_title('Non-Mayoral Agencies', fontsize = 25)
ax2.tick_params(which = 'major', labelsize = 25)
ax2.set_ylim(0,80)
plt.suptitle('Figure 1\nHow many agencies participated?', fontsize = 25, y = 1.00)
plt.show()
###Output
Number of datasets recorded: 1640
Total number of agencies: 99
Number of Agencies participating: 79
Percent of Agencies Participating: 79.8
Number of Mayoral Agencies: 69
Number of Mayoral Agencies Participating: 69
Percent of Mayoral Agencies Participating: 100.0
Number of Non-Mayoral Agencies: 30
Number of Non-Mayoral Agencies Participating: 10
Percent of Non-Mayoral Agencies Participating: 33.3
###Markdown
How many data sets did agencies enter?
###Code
agency_counts = df.AGENCYCODE.value_counts()
print(agency_counts.sort_values(ascending=False).head(n=10))
print('\nAverage Number of Data Sets: '+str(round(df.AGENCYCODE.value_counts().mean(),1)))
print('Median Number of Data Sets: '+str(df.AGENCYCODE.value_counts().median()))
print('Most Common Number of Data Sets: '+str(df.AGENCYCODE.value_counts().mode().values[0]))
fig, ax = plt.subplots(figsize=(20,12))
sns.barplot(agency_counts.index, agency_counts.values, color='blue')
plt.xticks(rotation=90)
ax.set_xticklabels(agency_counts.index, fontsize=14)
ax.set_yticklabels(np.arange(0, 250, 50), fontsize=25)
ax.set_xlabel('Agency Names', fontsize=25)
ax.set_ylabel('Number of Data Sets', fontsize=25)
ax.set_title('Figure 2\nHow many datasets did agencies inventory?', fontsize=25)
plt.show()
###Output
OCTO 250
DCPS 168
DDOT 143
DOH 109
DOEE 76
OP 70
OCFO 55
DPW 39
DCRA 37
OUC 36
Name: AGENCYCODE, dtype: int64
Average Number of Data Sets: 21.6
Median Number of Data Sets: 9.5
Most Common Number of Data Sets: 2
###Markdown
Dataset Classification
###Code
class_names = ['Open', 'Public Not Proactively Released', 'For District Government Use',
'Confidential', 'Restricted Confidential']
df['classification'] = df.DATASET_CLASSIFICATION_NAME.map({'Open': 0, 'Public Not Proactively Released': 1,
'For District Government Use': 2, 'Confidential': 3,
'Restricted Confidential': 4})
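# The mapping assigns an ordinal code: 0 is the most open classification and
# 4 the most restricted, so plots ordered by this code read from open to
# restricted.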
print('Number Classified')
print(np.round(df.groupby(['classification', 'DATASET_CLASSIFICATION_NAME']).classification.count(), 3))
print('Percent Classified')
print(100 * np.round(df.groupby(['classification', 'DATASET_CLASSIFICATION_NAME']).classification.count()/len(df), 3))
fig, ax = plt.subplots(figsize = (20,12))
sns.barplot(np.sort(df.classification.unique()), df.classification.value_counts().sort_index(), ax=ax)
ax.set_ylabel('Number of Data Sets Classified', fontsize=25)
ax.set_xticks(np.arange(-.25,4.75))
ax.set_xticklabels(class_names)
plt.xticks(rotation=35)
ax.tick_params(which = 'major', labelsize = 25)
ax.set_title('Figure 3\nHow were datasets classified?', fontsize = 25)
###Output
Number Classified
classification DATASET_CLASSIFICATION_NAME
0 Open 708
1 Public Not Proactively Released 137
2 For District Government Use 193
3 Confidential 508
4 Restricted Confidential 94
Name: classification, dtype: int64
Percent Classified
classification DATASET_CLASSIFICATION_NAME
0 Open 43.2
1 Public Not Proactively Released 8.4
2 For District Government Use 11.8
3 Confidential 31.0
4 Restricted Confidential 5.7
Name: classification, dtype: float64
###Markdown
How many Open Data Sets are in the Open Data Portal?
###Code
print('Data Sets in Open Data Portal: ' +str(df[df.OPENDATA_PORTAL=='Yes'].shape[0]))
print('Data Sets Classified \'Open\': ' +str(df[df.DATASET_CLASSIFICATION_NAME=='Open'].shape[0]))
print('Open Data Sets not in Open Data Portal: ' +str(df[(df.DATASET_CLASSIFICATION_NAME == 'Open') & (df.OPENDATA_PORTAL == 'No')].shape[0]))
open_data = df[df.DATASET_CLASSIFICATION_NAME=='Open']
print(open_data.OPENDATA_PORTAL.value_counts().sort_index())
print(100 * round(open_data.OPENDATA_PORTAL.value_counts().sort_index()/len(open_data), 3))
fig, (ax) = plt.subplots(1,1,figsize = (20,12))
sns.barplot(np.sort(open_data.OPENDATA_PORTAL.unique()), open_data.OPENDATA_PORTAL.value_counts().sort_index(), ax=ax)
ax.set_xlabel('In Open Data Portal', fontsize = 25)
ax.set_ylabel('Number of Data Sets Classified Open', fontsize=25)
ax.set_xticklabels(['No', 'Yes'])
ax.tick_params(which = 'major', labelsize = 25)
ax.set_title('Figure 4\nIs DC\'s Open Data in the Open Data Portal?', fontsize=25)
###Output
Data Sets in Open Data Portal: 517
Data Sets Classified 'Open': 708
Open Data Sets not in Open Data Portal: 191
No 191
Yes 517
Name: OPENDATA_PORTAL, dtype: int64
No 27.0
Yes 73.0
Name: OPENDATA_PORTAL, dtype: float64
###Markdown
Data Set Category
###Code
categories = pd.DataFrame(df.DATASET_CATEGORY.value_counts())
categories = categories[categories.index != '-1']
print(categories.DATASET_CATEGORY)
fig, ax = plt.subplots(figsize = (20,12))
sns.barplot(categories.index, categories.DATASET_CATEGORY, ax=ax, color = 'blue')
plt.xticks(rotation=90)
ax.tick_params(which = 'major', labelsize = 25)
ax.set_ylabel('Number of Data Sets Categorized', fontsize=25)
ax.set_title('Figure 5\nHow were datasets categorized?', fontsize=25)
###Output
Government Operations 207
Health 175
Transportation 165
Education 145
Public Services 122
Public Safety 122
Business and Economic Development 108
Administrative and Other Boundaries 102
Environment 98
Utility and Communication 64
Property and Land 60
Planning Land Use and Zoning 46
Financial 45
Aerial Photography and Scanned Maps 42
Historic 32
Facility and Structure 22
Demographic 21
Cultural and Society 20
Recreation 17
Location 11
Technology 6
Elevation 5
Communication 4
Basemap 1
Name: DATASET_CATEGORY, dtype: int64
|
notebooks/source/bayesian_regression.ipynb | ###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](#Dataset)2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](#Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](#References)
###Code
%reset -s -f
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.4.0')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
###Code
sns.regplot('WaffleHouses', 'Divorce', dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)] which explores the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which given the parameter values returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: print diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:04<00:00, 669.73it/s, 3 steps of size 7.48e-01. acc. prob=0.91]
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability which is 0.8, by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` since `init_samples` was a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.condition(handlers.seed(model, rng_key), post_samples)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
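# predict_fn maps a batch of PRNG keys and a batch of posterior samples to an
# array of shape (num_samples, 50): one predicted (standardized) divorce rate
# per state for every posterior draw.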
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native Python for loop over each sample, which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](#References)] which is given by $$ \log \prod_{i=1}^{n} \int p(y_i | \theta) p_{post}(\theta) d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i | \theta^{s})}{S} = \sum_{i=1}^n \left( \log \sum_s p(y_i | \theta^{s}) - \log S \right)$$.Here, $i$ indexes the observed data points $y$ and $s$ indexes the posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
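# In log_pred_density, logsumexp(log_lk_vals, 0) - log(n) evaluates
# log(mean over s of p(y_i | theta^s)) in a numerically stable way; summing
# over the data points i then gives the log posterior predictive density
# defined above.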
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing `log likelihood` as in the first function for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll out your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.65252685546875
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log likelihood as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.238067626953125
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to Model 2 which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06075668334961
###Markdown
Divorce Rate Residuals by StateThe regression plots above shows that the observed divorce rates for many states differs considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
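###Markdown
Before reading the plot, a small sketch can surface the extremes numerically: since `idx` (computed above) sorts the mean residuals in ascending order, the first entries are the most over-predicted states and the last are the most under-predicted.
###Code
# Sketch: the states at the extremes of the sorted residuals.
# Most negative residual = observed < predicted = over-prediction.
print('Most over-predicted: ', list(dset.Loc.values[idx][:3]))
print('Most under-predicted:', list(dset.Loc.values[idx][-3:]))
###Output
_____no_output_____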
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.

Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors missing from our model that affect the divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by the `Divorce SE` variable in the dataset. We will explore this in the next section.

Regression Model with Measurement Error
Note that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate the measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 15 of [[1](#References)].

To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:06<00:00, 650.76it/s, 15 steps of size 3.00e-01. acc. prob=0.91]
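###Markdown
As a quick numerical check (a sketch over the samples already collected), we can compare the posterior means of the shared parameters between Model 3 and the measurement-error model; the discussion below notes that they should be very similar.
###Code
# Sketch: posterior means of the shared parameters, Model 3 vs. Model 4.
for p in ['a', 'bM', 'bA', 'sigma']:
    print('{:>5}: Model 3 = {:+.3f}, Model 4 = {:+.3f}'.format(
        p, jnp.mean(samples_3[p]), jnp.mean(samples_4[p])))
###Output
_____no_output_____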
###Markdown
Effect of Incorporating Measurement Noise on Residuals
Notice that our values for the regression coefficients are very similar to those from Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner (orange) error bar. The black dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.

To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
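###Markdown
The shrinkage visible in the plot can also be summarized numerically; the sketch below compares the mean absolute residuals of the two models using the arrays computed above.
###Code
# Sketch: average reduction in |mean residual| after modeling measurement noise.
mar_3 = jnp.mean(jnp.abs(jnp.mean(residuals_3, 0)))
mar_4 = jnp.mean(jnp.abs(jnp.mean(residuals_4, 0)))
print('Mean |residual|: Model 3 = {:.3f}, Model 4 = {:.3f}'.format(mar_3, mar_4))
###Output
_____no_output_____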
###Markdown
Bayesian Regression Using NumPyro
In this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following:
 - Write a simple model using the `sample` NumPyro primitive.
 - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest.
 - Learn about inference utilities such as `Predictive` and `log_likelihood`.
 - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC, e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc.

Tutorial Outline:
1. [Dataset](#Dataset)
2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate)
    - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate)
    - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters)
    - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution)
    - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers)
    - [Posterior Predictive Density](#Posterior-Predictive-Density)
    - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage)
    - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage)
    - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)
3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error)
    - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)
4. [References](#References)
###Code
%reset -s -f
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as np
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as onp
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.2.4')
###Output
_____no_output_____
###Markdown
Dataset
For this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
###Code
sns.regplot('WaffleHouses', 'Divorce', dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial.

Regression Model to Predict Divorce Rate
Let us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
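###Markdown
As a quick sanity check (a sketch using pandas only), the standardized columns should now have mean approximately 0 and standard deviation approximately 1.
###Code
# Sketch: verify the standardization worked as intended.
print(dset[['AgeScaled', 'MarriageScaled', 'DivorceScaled']]
      .agg(['mean', 'std']).round(6))
###Output
_____no_output_____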
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following:
 - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data.
 - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data.
 - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
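###Markdown
Effect handlers (covered in more detail later) also let us introspect a model before running inference. The sketch below seeds the model and records its execution trace, printing each sample site; this is a convenient way to confirm the site names (`a`, `bM`, `sigma`, `obs`) the model exposes.
###Code
# Sketch: trace a single seeded run of the model to list its sample sites.
exec_trace = handlers.trace(handlers.seed(model, random.PRNGKey(1))).get_trace(
    marriage=dset.MarriageScaled.values)
for site_name, site in exec_trace.items():
    print(site_name, site['type'], getattr(site['value'], 'shape', None))
###Output
_____no_output_____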
###Markdown
Model 1: Predictor - Marriage Rate
We first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for the `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.

The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods:
 - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase.
 - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic.
 - `get_samples()`: gets samples from the posterior distribution.

Note the following:
 - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed.
 - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:06<00:00, 487.44it/s, 3 steps of size 7.35e-01. acc. prob=0.92]
###Markdown
Posterior Distribution over the Regression Parameters
We notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.

During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.

At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The values for these diagnostics indicate that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.

To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = np.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = np.expand_dims(samples_1['a'], -1) + \
np.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = np.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected.

Posterior Predictive Distribution
Let us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = np.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect Handlers
To remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.condition(handlers.seed(model, rng_key), post_samples)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
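###Markdown
Before vectorizing, it can help to see what a single call does. The sketch below conditions on one posterior draw (index 0) and generates one set of predictions over the 50 states.
###Code
# Sketch: one non-vectorized call of `predict` with a single posterior draw.
one_draw = {name: values[0] for name, values in samples_1.items()}
single_pred = predict(random.PRNGKey(2), one_draw, model,
                      marriage=dset.MarriageScaled.values)
print(single_pred.shape)  # one prediction per state: (50,)
###Output
_____no_output_____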
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function.
 - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model.
 - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC.
 - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density.

It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by MCMC) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native for loop over each sample, which would be much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = np.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit.

Posterior Predictive Density
Likewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](#References)], which is given by $$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i \mid \theta^{s})}{S} = \sum_{i=1}^n \left( \log \sum_s p(y_i \mid \theta^{s}) - \log S \right). $$ Here, $i$ indexes the observed data points $y$, $s$ indexes the posterior samples over the latent parameters $\theta$, and $S$ is the number of posterior samples. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(handlers.seed(model, rng_key), params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - np.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing the log likelihood, as in the first function above, for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.70353698730469
###Markdown
Model 2: Predictor - Median Age of Marriage
We will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following:
 - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate.
 - We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = np.expand_dims(samples_2['a'], -1) + \
np.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = np.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = np.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.22672653198242
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of Marriage
Finally, we will also model the divorce rate as depending on both the marriage rate and the median age of marriage. Note that the model's posterior predictive density is similar to Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.12065505981445
###Markdown
Divorce Rate Residuals by State
The regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = np.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = np.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = np.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = np.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(np.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = np.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(np.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.

Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors missing from our model that affect the divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by the `Divorce SE` variable in the dataset. We will explore this in the next section.

Regression Model with Measurement Error
Note that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate the measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 15 of [[1](#References)].

To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / np.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:10<00:00, 379.65it/s, 15 steps of size 2.93e-01. acc. prob=0.92]
###Markdown
Effect of Incorporating Measurement Noise on Residuals
Notice that our values for the regression coefficients are very similar to those from Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = np.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = np.argsort(residuals_mean)
y = np.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(np.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(np.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner (orange) error bar. The black dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.

To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = np.mean(residuals_3, 0)
y2 = np.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyro
In this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following:
 - Write a simple model using the `sample` NumPyro primitive.
 - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest.
 - Learn about inference utilities such as `Predictive` and `log_likelihood`.
 - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC, e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc.

Tutorial Outline:
1. [Dataset](#Dataset)
2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate)
    - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate)
    - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters)
    - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution)
    - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers)
    - [Posterior Predictive Density](#Posterior-Predictive-Density)
    - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage)
    - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage)
    - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)
3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error)
    - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)
4. [References](#References)
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.6.0')
###Output
_____no_output_____
###Markdown
Dataset
For this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
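###Markdown
A quick look at the dataset's shape and the columns used later in this tutorial (a sketch; the column names follow the `WaffleDivorce` CSV loaded above).
###Code
# Sketch: dataset dimensions and the columns we will rely on below.
print(dset.shape)
print(dset[['Location', 'Marriage', 'MedianAgeMarriage', 'Divorce', 'Divorce SE']].head())
###Output
_____no_output_____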
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
###Code
sns.regplot('WaffleHouses', 'Divorce', dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial.

Regression Model to Predict Divorce Rate
Let us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following:
 - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data.
 - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data.
 - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage Rate
We first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for the `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.

The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods:
 - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase.
 - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic.
 - `get_samples()`: gets samples from the posterior distribution.

Note the following:
 - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed.
 - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
# Run NUTS.
kernel = NUTS(model)
num_warmup, num_samples = 1000, 2000  # num_samples is reused for vmap'd predictions below
mcmc = MCMC(kernel, num_warmup=num_warmup, num_samples=num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:06<00:00, 429.13it/s, 3 steps of size 7.48e-01. acc. prob=0.91]
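###Markdown
The printed summary can also be reproduced by hand from the raw samples; the sketch below computes the mean, std and 90% hpdi for each latent parameter using the `hpdi` helper imported earlier.
###Code
# Sketch: recompute the headline summary statistics from the raw samples.
for name, values in samples_1.items():
    low, high = hpdi(values, 0.9)
    print('{:>5}: mean={:+.2f} std={:.2f} 90% hpdi=({:+.2f}, {:+.2f})'.format(
        name, jnp.mean(values), jnp.std(values), low, high))
###Output
_____no_output_____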
###Markdown
Posterior Distribution over the Regression Parameters
We notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.

During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.

At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The values for these diagnostics indicate that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.

To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected.

Posterior Predictive Distribution
Let us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
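###Markdown
`Predictive` can also draw from the prior predictive distribution: instead of passing posterior samples, we pass `num_samples` and the latent parameters are sampled from their priors. A minimal sketch:
###Code
# Sketch: prior predictive draws (latents sampled from their priors).
prior_predictive = Predictive(model, num_samples=100)
prior_predictions = prior_predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
print(prior_predictions.shape)  # (100, 50)
###Output
_____no_output_____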
###Markdown
Predictive Utility With Effect Handlers
To remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function.
 - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model.
 - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC.
 - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density.

It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by MCMC) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native for loop over each sample, which would be much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit.

Posterior Predictive Density
Likewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](#References)], which is given by $$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i \mid \theta^{s})}{S} = \sum_{i=1}^n \left( \log \sum_s p(y_i \mid \theta^{s}) - \log S \right). $$ Here, $i$ indexes the observed data points $y$, $s$ indexes the posterior samples over the latent parameters $\theta$, and $S$ is the number of posterior samples. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing the log likelihood, as in the first function above, for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.65252685546875
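###Markdown
For comparison, here is a short sketch of the built-in utility applied to the same posterior samples (the alias `numpyro_log_likelihood` simply avoids shadowing our own `log_likelihood` above); it returns a dictionary mapping each observed site to an array of per-sample log likelihoods, to which we apply the same reduction as in `log_pred_density`:
###Code
from numpyro.infer import log_likelihood as numpyro_log_likelihood
log_lk = numpyro_log_likelihood(model, samples_1,
                                marriage=dset.MarriageScaled.values,
                                divorce=dset.DivorceScaled.values)['obs']
# logsumexp over the sample axis, minus log(S), summed over data points.
print((logsumexp(log_lk, 0) - jnp.log(log_lk.shape[0])).sum())
###Output
_____no_output_____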
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log likelihood as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.238067626953125
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate and the median age of marriage. Note that the model's posterior predictive density is similar to that of Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06075668334961
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true. Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors missing from our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](#References)]. To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:09<00:00, 408.70it/s, 15 steps of size 3.00e-01. acc. prob=0.91]
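###Markdown
Before discussing the fit, here is a quick sketch comparing the posterior means of the shared coefficients from Model 3 against the measurement-error model:
###Code
# Posterior means side by side: Model 3 vs. the measurement-error model.
for name in ['bM', 'bA', 'sigma']:
    print(name, jnp.mean(samples_3[name]), jnp.mean(samples_4[name]))
###Output
_____no_output_____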
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those of Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner (orange) error bar. The black markers are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model. To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC, e.g. computing the model log likelihood, generating an empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](#Dataset)2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](#Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](#References)
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.6.0')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouses` and plot the results.
###Code
sns.regplot(x='WaffleHouses', y='Divorce', data=dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC, which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for the `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model. The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution. Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
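Because every draw consumes an explicit key, reproducibility is simply a matter of reusing the same key. A tiny sketch (illustrative only):
###Code
# Splitting one key yields independent streams; reusing a subkey
# deterministically reproduces the same draws.
k1, k2 = random.split(random.PRNGKey(0))
assert jnp.allclose(random.normal(k2, (3,)), random.normal(k2, (3,)))
###Output
_____no_output_____
###Markdown
Now let us run NUTS on Model 1: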
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup=num_warmup, num_samples=num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:06<00:00, 429.13it/s, 3 steps of size 7.48e-01. acc. prob=0.91]
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase. During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution. At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The values for these diagnostics indicate that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a `dict`, keyed by site name, containing samples from the posterior distribution for each of the latent parameters in the model. To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute the CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
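###Markdown
As a quick sanity check on the shapes involved (a sketch over the arrays computed above): the posterior samples sit along the leading axis, and `hpdi` returns the lower and upper interval bounds stacked along a new leading axis of size 2.
###Code
print(posterior_mu.shape)  # (num_samples, 50): one regression mean per draw and state
print(hpdi_mu.shape)       # (2, 50): lower/upper 90% HPDI bounds per state
###Output
_____no_output_____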
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
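###Markdown
The same utility can also sample from the prior predictive distribution by passing `num_samples` instead of posterior samples. A short sketch (the key below is arbitrary):
###Code
prior_predictive = Predictive(model, num_samples=100)
prior_pred = prior_predictive(random.PRNGKey(11), marriage=dset.MarriageScaled.values)['obs']
print(prior_pred.shape)  # (100, 50): one prior draw per sample and state
###Output
_____no_output_____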
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This saves us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions. Note the use of JAX's auto-vectorization transform [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions; without it, we would have to use a native Python for loop over each sample, which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
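Before generating the predictions, here is a minimal sketch (variable names are illustrative) of what these handlers do: it conditions the model on a single posterior draw, seeds it, and inspects the recorded trace.
###Code
# Condition the model on one posterior draw, seed it, and record its trace.
single_draw = {k: v[0] for k, v in samples_1.items()}
seeded_model = handlers.seed(handlers.condition(model, single_draw), random.PRNGKey(1))
exec_trace = handlers.trace(seeded_model).get_trace(marriage=dset.MarriageScaled.values)
# Each trace entry records the site name, its type, and the sampled/conditioned value.
for name, site in exec_trace.items():
    print(name, site['type'], jnp.shape(site['value']))
###Output
_____no_output_____
###Markdown
With that picture in mind, let us generate the vectorized predictions and compare them against `Predictive`: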
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](#References)], which is given by $$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i \mid \theta^{s})}{S} = \sum_{i=1}^n \Big( \log \sum_s p(y_i \mid \theta^{s}) - \log S \Big). $$ Here, $i$ indexes the observed data points $y$ and $s$ indexes the $S$ posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing the `log likelihood` as in the first function above for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.65252685546875
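###Markdown
For comparison, here is a short sketch of the built-in utility applied to the same posterior samples (the alias `numpyro_log_likelihood` simply avoids shadowing our own `log_likelihood` above); it returns a dictionary mapping each observed site to an array of per-sample log likelihoods, to which we apply the same reduction as in `log_pred_density`:
###Code
from numpyro.infer import log_likelihood as numpyro_log_likelihood
log_lk = numpyro_log_likelihood(model, samples_1,
                                marriage=dset.MarriageScaled.values,
                                divorce=dset.DivorceScaled.values)['obs']
# logsumexp over the sample axis, minus log(S), summed over data points.
print((logsumexp(log_lk, 0) - jnp.log(log_lk.shape[0])).sum())
###Output
_____no_output_____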
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log likelihood as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.238067626953125
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate and the median age of marriage. Note that the model's posterior predictive density is similar to that of Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06075668334961
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true. Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors missing from our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](#References)]. To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:09<00:00, 408.70it/s, 15 steps of size 3.00e-01. acc. prob=0.91]
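###Markdown
Before discussing the fit, here is a quick sketch comparing the posterior means of the shared coefficients from Model 3 against the measurement-error model:
###Code
# Posterior means side by side: Model 3 vs. the measurement-error model.
for name in ['bM', 'bA', 'sigma']:
    print(name, jnp.mean(samples_3[name]), jnp.mean(samples_4[name]))
###Output
_____no_output_____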
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those of Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner (orange) error bar. The black markers are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model. To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC, e.g. computing the model log likelihood, generating an empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](#Dataset)2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](#Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](#References)
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use("bmh")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
assert numpyro.__version__.startswith("0.9.2")
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = "https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv"
dset = pd.read_csv(DATASET_URL, sep=";")
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = [
"Population",
"MedianAgeMarriage",
"Marriage",
"WaffleHouses",
"South",
"Divorce",
]
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette="husl");
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouses` and plot the results.
###Code
sns.regplot(x="WaffleHouses", y="Divorce", data=dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset["AgeScaled"] = dset.MedianAgeMarriage.pipe(standardize)
dset["MarriageScaled"] = dset.Marriage.pipe(standardize)
dset["DivorceScaled"] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC, which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample("a", dist.Normal(0.0, 0.2))
M, A = 0.0, 0.0
if marriage is not None:
bM = numpyro.sample("bM", dist.Normal(0.0, 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample("bA", dist.Normal(0.0, 0.5))
A = bA * age
sigma = numpyro.sample("sigma", dist.Exponential(1.0))
mu = a + M + A
numpyro.sample("obs", dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for the `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model. The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution. Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
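Because every draw consumes an explicit key, reproducibility is simply a matter of reusing the same key. A tiny sketch (illustrative only):
###Code
# Splitting one key yields independent streams; reusing a subkey
# deterministically reproduces the same draws.
k1, k2 = random.split(random.PRNGKey(0))
assert jnp.allclose(random.normal(k2, (3,)), random.normal(k2, (3,)))
###Output
_____no_output_____
###Markdown
Now let us run NUTS on Model 1: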
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
# Run NUTS.
kernel = NUTS(model)
num_samples = 2000
mcmc = MCMC(kernel, num_warmup=1000, num_samples=num_samples)
mcmc.run(
rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values
)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:04<00:00, 748.14it/s, 7 steps of size 7.41e-01. acc. prob=0.92]
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase. During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution. At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The values for these diagnostics indicate that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a `dict`, keyed by site name, containing samples from the posterior distribution for each of the latent parameters in the model. To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute the CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, "o")
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = (
jnp.expand_dims(samples_1["a"], -1)
+ jnp.expand_dims(samples_1["bM"], -1) * dset.MarriageScaled.values
)
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(
xlabel="Marriage rate", ylabel="Divorce rate", title="Regression line with 90% CI"
);
###Output
_____no_output_____
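###Markdown
As a quick sanity check on the shapes involved (a sketch over the arrays computed above): the posterior samples sit along the leading axis, and `hpdi` returns the lower and upper interval bounds stacked along a new leading axis of size 2.
###Code
print(posterior_mu.shape)  # (num_samples, 50): one regression mean per draw and state
print(hpdi_mu.shape)       # (2, 50): lower/upper 90% HPDI bounds per state
###Output
_____no_output_____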
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Prior Predictive DistributionLet us check that we have set sensible priors by sampling from the prior predictive distribution. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
prior_predictive = Predictive(model, num_samples=100)
prior_predictions = prior_predictive(rng_key_, marriage=dset.MarriageScaled.values)[
"obs"
]
mean_prior_pred = jnp.mean(prior_predictions, axis=0)
hpdi_prior_pred = hpdi(prior_predictions, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_prior_pred, hpdi_prior_pred)
ax.set(xlabel="Marriage rate", ylabel="Divorce rate", title="Predictions with 90% CI");
###Output
_____no_output_____
###Markdown
Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)["obs"]
df = dset.filter(["Location"])
df["Mean Predictions"] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace["obs"]["value"]
# vectorize predictions via vmap
predict_fn = vmap(
lambda rng_key, samples: predict(
rng_key, samples, model, marriage=dset.MarriageScaled.values
)
)
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This saves us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions. Note the use of JAX's auto-vectorization transform [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions; without it, we would have to use a native Python for loop over each sample, which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
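Before generating the predictions, here is a minimal sketch (variable names are illustrative) of what these handlers do: it conditions the model on a single posterior draw, seeds it, and inspects the recorded trace.
###Code
# Condition the model on one posterior draw, seed it, and record its trace.
single_draw = {k: v[0] for k, v in samples_1.items()}
seeded_model = handlers.seed(handlers.condition(model, single_draw), random.PRNGKey(1))
exec_trace = handlers.trace(seeded_model).get_trace(marriage=dset.MarriageScaled.values)
# Each trace entry records the site name, its type, and the sampled/conditioned value.
for name, site in exec_trace.items():
    print(name, site["type"], jnp.shape(site["value"]))
###Output
_____no_output_____
###Markdown
With that picture in mind, let us generate the vectorized predictions and compare them against `Predictive`: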
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(["Location"])
df["Mean Predictions"] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel="Marriage rate", ylabel="Divorce rate", title="Predictions with 90% CI");
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](#References)], which is given by $$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i \mid \theta^{s})}{S} = \sum_{i=1}^n \left( \log \sum_s p(y_i \mid \theta^{s}) - \log S \right). $$ Here, $i$ indexes the observed data points $y$ and $s$ indexes the $S$ posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace["obs"]
return obs_node["fn"].log_prob(obs_node["value"])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(
lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs)
)
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
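###Markdown
Before using this, here is a tiny numerical sanity check (purely illustrative, not part of the original tutorial) of the identity behind the last line of `log_pred_density`: `logsumexp(v) - log(n)` computes `log(mean(exp(v)))` in a numerically stable way, which is exactly the Monte Carlo average over posterior samples in the formula above.
###Code
# Both expressions agree; the logsumexp form avoids underflow when the
# per-sample log likelihood values are very negative.
v = jnp.array([-1.0, -2.0, -3.0])
print(logsumexp(v) - jnp.log(3.0), jnp.log(jnp.mean(jnp.exp(v))))
###Output
_____no_output_____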
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing the log likelihood, as in the first function above, for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -66.70008087158203
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = (
jnp.expand_dims(samples_2["a"], -1)
+ jnp.expand_dims(samples_2["bA"], -1) * dset.AgeScaled.values
)
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(
xlabel="Median marriage age",
ylabel="Divorce rate",
title="Regression line with 90% CI",
);
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_, age=dset.AgeScaled.values)["obs"]
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel="Median Age", ylabel="Divorce rate", title="Predictions with 90% CI");
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -59.251956939697266
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to that of Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -59.06374740600586
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(
rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values
)["obs"]
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, "--")
ax[0].errorbar(
pred_mean[idx],
y,
xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker="o",
ms=5,
mew=4,
ls="none",
alpha=0.8,
)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker="o", ls="none", color="gray")
ax[0].set(
xlabel="Posterior Predictive (red) vs. Actuals (gray)",
ylabel="State",
title="Posterior Predictive with 90% CI",
)
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10)
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, "--")
ax[1].errorbar(
residuals_mean[idx], y, xerr=err[idx], marker="o", ms=5, mew=4, ls="none", alpha=0.8
)
ax[1].set(xlabel="Residuals", ylabel="State", title="Residuals with 90% CI")
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors missing from our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate the measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations with higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](#References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
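In equation form, the generative process implemented below is: $$ \mu_i = a + b_M M_i + b_A A_i, \qquad \text{divorce\_rate}_i \sim \mathcal{N}(\mu_i, \sigma), \qquad \text{obs}_i \sim \mathcal{N}(\text{divorce\_rate}_i, \text{divorce\_sd}_i), $$ so each state's true (latent) divorce rate is observed through its own measurement noise.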
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample("a", dist.Normal(0.0, 0.2))
bM = numpyro.sample("bM", dist.Normal(0.0, 0.5))
M = bM * marriage
bA = numpyro.sample("bA", dist.Normal(0.0, 0.5))
A = bA * age
sigma = numpyro.sample("sigma", dist.Exponential(1.0))
mu = a + M + A
divorce_rate = numpyro.sample("divorce_rate", dist.Normal(mu, sigma))
numpyro.sample("obs", dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset["DivorceScaledSD"] = dset["Divorce SE"] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
divorce=dset.DivorceScaled.values,
)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:06<00:00, 578.19it/s, 15 steps of size 2.58e-01. acc. prob=0.93]
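###Markdown
As a quick, purely illustrative check (not part of the original tutorial), we can compare the posterior mean of `sigma` with and without measurement error; with the extra noise term absorbing observation-level uncertainty, we expect the latent rate noise `sigma` to shrink.
###Code
# Posterior mean of sigma under Model 3 (no measurement error) vs. the
# measurement-error model above.
print("Model 3 sigma:", jnp.mean(samples_3["sigma"]))
print("Measurement-error model sigma:", jnp.mean(samples_4["sigma"]))
###Output
_____no_output_____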
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those of Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
)["obs"]
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, "--")
ax.errorbar(
residuals_mean[idx], y, xerr=err[idx], marker="o", ms=5, mew=4, ls="none", alpha=0.8
)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx], ls="none", color="orange", alpha=0.9)
# Plot earlier mean residual
ax.plot(
jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx],
y,
ls="none",
marker="o",
ms=6,
color="black",
alpha=0.6,
)
ax.set(xlabel="Residuals", ylabel="State", title="Residuals with 90% CI")
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10)
ax.text(
-2.8,
-7,
"Residuals (with error-bars) from current model (in red). "
"Black marker \nshows residuals from the previous model (Model 3). "
"Measurement \nerror is indicated by orange bar.",
);
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise indicated by the inner (orange) error bars. The black dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls="none", marker="o")
ax.plot(x, y2, ls="none", marker="o")
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], "--", color="gray")
ax.set(
xlabel="Measurement Noise",
ylabel="Residual",
title="Mean residuals (Model 4: red, Model 3: blue)",
);
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about utilities such as `initialize_model` that are useful for running HMC. - Learn how we can use effect-handlers in NumPyro to generate execution traces, condition on sample sites, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](#Dataset)2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution) - [Model Log Likelihood](#Model-Log-Likelihood) - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](#References)
###Code
%reset -s -f
import jax
import jax.numpy as np
from jax import random, vmap
from jax.config import config; config.update("jax_platform_name", "cpu")
from jax.scipy.special import logsumexp
import matplotlib
import matplotlib.pyplot as plt
import numpy as onp
import pandas as pd
import seaborn as sns
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro.handlers import sample, seed, substitute, trace
from numpyro.hmc_util import initialize_model
from numpyro.mcmc import mcmc
%matplotlib inline
plt.style.use('bmh')
plt.rcParams.update({'font.size': 16,
'xtick.labelsize': 14,
'ytick.labelsize': 14,
'axes.titlesize': 'large',
'axes.labelsize': 'medium'})
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results. This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial.
###Code
sns.regplot(x='WaffleHouses', y='Divorce', data=dset);
###Output
_____no_output_____
###Markdown
Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in faster inference. Refer to this [note](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html) in the Stan manual for more details.
###Code
dset['AgeScaled'] = (dset.MedianAgeMarriage - onp.mean(dset.MedianAgeMarriage)) / onp.std(dset.MedianAgeMarriage)
dset['MarriageScaled'] = (dset.Marriage - onp.mean(dset.Marriage)) / onp.std(dset.Marriage)
dset['DivorceScaled'] = (dset.Divorce - onp.mean(dset.Divorce)) / onp.std(dset.Divorce)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, model code is any Python callable that can accept arguments and keywords. For HMC, which we will be using in this tutorial, these arguments and keywords cannot change during model execution. This is convenient for passing in numpy arrays, or boolean arguments that might affect the execution path. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects by the effect handlers used by inference algorithms in NumPyro. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples from some distribution of interest. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
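In equation form, the full model (with both predictors passed in) is: $$ \mu = a + b_M \cdot \text{marriage} + b_A \cdot \text{age}, \qquad \text{obs} \sim \mathcal{N}(\mu, \sigma), $$ with priors $a \sim \mathcal{N}(0, 0.2)$, $b_M \sim \mathcal{N}(0, 0.5)$, $b_A \sim \mathcal{N}(0, 0.5)$ and $\sigma \sim \text{Exponential}(1)$, as coded below.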
###Code
def model(marriage=None, age=None, divorce=None):
a = sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = sample('sigma', dist.Exponential(1.))
mu = a + M + A
sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
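###Markdown
Before running inference, here is a small, purely illustrative example (using the handler imports above, not part of the original tutorial) of running the model forward: seeding it with a `PRNGKey` and tracing one execution draws all latent sites, including `obs`, from their priors.
###Code
# Run the model forward once under a fixed seed and inspect the prior draw
# for the observation site.
prior_trace = trace(seed(model, random.PRNGKey(2))).get_trace(marriage=dset.MarriageScaled.values)
print(prior_trace['obs']['value'][:5])
###Output
_____no_output_____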
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.Note the following requirements for running HMC and NUTS in NumPyro: - The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. - The verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, all of this is handled on the backend for us. Let us go through the steps one by one. - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - The function [initialize_model](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.hmc_util.initialize_model) takes a model along with model arguments (and keyword arguments), and returns a tuple of initial parameters, potential energy function, and constrain function. The initial parameters are used to initiate the MCMC chain, and the potential energy function is a callable that, when given unconstrained sample values, returns the potential energy at these sample values. This is used by the verlet integrator in HMC. Lastly, `constrain_fn` is a callable that transforms the unconstrained samples returned by HMC/NUTS to sample values that lie within the constrained support. - Finally, we use the [mcmc](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.mcmc) function to run inference using the default `NUTS` sampler. Note that to run vanilla HMC, all you need to do is to pass `algo='HMC'` as an argument to `mcmc` instead. This is a convenience utility that does all of the following: - Runs warmup - adapts step size and mass matrix. - Uses the sample from the warmup phase to start MCMC. - Returns samples from the posterior distribution and prints diagnostic information.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng = random.PRNGKey(0)
rng_, rng = random.split(rng)
# Initialize the model.
init_params, potential_fn, constrain_fn = initialize_model(rng_, model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
samples_1 = mcmc(num_warmup, num_samples, init_params,
potential_fn=potential_fn,
trajectory_length=10,
constrain_fn=constrain_fn)
###Output
warmup: 100%|██████████| 1000/1000 [00:12<00:00, 78.24it/s, 1 steps of size 6.99e-01. acc. prob=0.79]
sample: 100%|██████████| 2000/2000 [00:03<00:00, 515.37it/s, 3 steps of size 6.99e-01. acc. prob=0.88]
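###Markdown
As an aside (a hypothetical check, assuming the behavior of the older NumPyro API used in this notebook), we can call the returned `potential_fn` directly on the unconstrained initial parameters to evaluate the negative log joint density that HMC works with:
###Code
# Potential energy (negative log joint density) at the initial parameters.
print(potential_fn(init_params))
###Output
_____no_output_____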
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability (0.8, by default). We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt or learn values for hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, it might be reflected in low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains on more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` since `init_params` was a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.We can see from the plot that the CI broadens towards the tails where values of the predictor variables are sparse, as can be expected.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = np.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = np.expand_dims(samples_1['a'], -1) + \
np.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = np.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
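###Markdown
The same `hpdi` utility works on any array of posterior draws along the leading axis; for example, the 90% interval for the marriage-rate coefficient alone (a quick illustrative check):
###Code
# 90% highest posterior density interval for bM, returned as a length-2 array.
print(hpdi(samples_1['bM'], 0.9))
###Output
_____no_output_____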
###Markdown
Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. This sounds complicated, but it can be easily achieved by using effect handlers from the [handlers module](https://numpyro.readthedocs.io/en/latest/handlers.html).In particular, note the use of the `substitute`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement. - The `substitute` effect handler simply substitutes the value for the site name present in the `post_samples` dict instead of sampling from the distribution, which can be useful for conditioning sample sites to certain values. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density (a short sketch after the next cell inspects one such trace). It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions, which are samples from the posterior predictive distribution. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. If we didn't use `vmap`, we would have to use a native for loop to generate predictions for each sample, which would be much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution.
###Code
def predict(rng, post_samples, model, *args, **kwargs):
model = substitute(seed(model, rng), post_samples)
model_trace = trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng, samples: predict(rng, samples, model, marriage=dset.MarriageScaled.values))
rng, rng_ = random.split(rng)
predictions_1 = predict_fn(random.split(rng_, num_samples), samples_1)
mean_pred = np.mean(predictions_1, axis=0)
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
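###Markdown
To make the trace object more concrete, here is a small sketch (not part of the original analysis) that records a single execution of the seeded model and lists the sample sites it contains:
###Code
# Trace one forward execution of the seeded model; each key is a sample site.
exec_trace = trace(seed(model, random.PRNGKey(1))).get_trace(
    marriage=dset.MarriageScaled.values)
print(list(exec_trace.keys()))  # expected: ['a', 'bM', 'sigma', 'obs']
###Output
_____no_output_____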
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Note that most data points lie well within the 90% CI, which indicates a good fit. Model Log LikelihoodLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset.
###Code
def log_lk(rng, params, model, *args, **kwargs):
model = substitute(seed(model, rng), params)
model_trace = trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return np.sum(obs_node['fn'].log_prob(obs_node['value']))
def expected_log_likelihood(rng, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng, params: log_lk(rng, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng, n), params)
return logsumexp(log_lk_vals) - np.log(n)
rng, rng_ = random.split(rng)
print('Log likelihood: {}'.format(expected_log_likelihood(rng_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log likelihood: -68.14618682861328
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - With Model 2 we get a higher log likelihood of -60.92, as compared to -68.15 for Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng, rng_ = random.split(rng)
init_params, potential_fn, constrain_fn = initialize_model(rng_, model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
samples_2 = mcmc(num_warmup, num_samples, init_params,
potential_fn=potential_fn,
trajectory_length=10,
constrain_fn=constrain_fn)
posterior_mu = np.expand_dims(samples_2['a'], -1) + \
np.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = np.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng, rng_ = random.split(rng)
predict_fn = vmap(lambda rng, samples: predict(rng, samples, model, age=dset.AgeScaled.values))
predictions_2 = predict_fn(random.split(rng_, num_samples), samples_2)
mean_pred = np.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng, rng_ = random.split(rng)
print('Log likelihood: {}'.format(expected_log_likelihood(rng_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log likelihood: -60.926387786865234
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that there is no increase in the model's log likelihood over Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng, rng_ = random.split(rng)
init_params, potential_fn, constrain_fn = initialize_model(rng_, model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
samples_3 = mcmc(num_warmup, num_samples, init_params,
potential_fn=potential_fn,
trajectory_length=10,
constrain_fn=constrain_fn)
rng, rng_ = random.split(rng)
print('Log likelihood: {}'.format(expected_log_likelihood(rng_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log likelihood: -61.04328918457031
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng, rng_ = random.split(rng)
predict_fn = vmap(lambda rng, samples: predict(rng, samples, model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values))
predictions_3 = predict_fn(random.split(rng_, num_samples), samples_3)
y = np.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = np.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = np.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = np.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(np.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray', alpha=0.5)
ax[0].set(xlabel='Posterior Predictive', ylabel='State', title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = np.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(np.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points lie well within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that we are missing in our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate the measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 15 of [[1](#References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = sample('a', dist.Normal(0., 0.2))
bM = sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = sample('divorce_rate', dist.Normal(mu, sigma))
sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
rng, rng_ = random.split(rng)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / np.std(dset.Divorce.values)
init_params, potential_fn, constrain_fn = initialize_model(rng_, model_se,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
divorce=dset.DivorceScaled.values)
samples_4 = mcmc(num_warmup=1000,
num_samples=3000,
init_params=init_params,
potential_fn=potential_fn,
trajectory_length=10,
target_accept_prob=0.9,
constrain_fn=constrain_fn)
###Output
warmup: 100%|██████████| 1000/1000 [00:19<00:00, 50.19it/s, 15 steps of size 2.16e-01. acc. prob=0.89]
sample: 100%|██████████| 3000/3000 [00:06<00:00, 442.19it/s, 15 steps of size 2.16e-01. acc. prob=0.94]
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those for Model 3. However, introducing measurement noise allows us to more closely match our predictive distributions to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng, rng_ = random.split(rng)
predict_fn = vmap(lambda rng, samples: predict(rng, samples, model_se,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values))
predictions_4 = predict_fn(random.split(rng_, 3000), samples_4)
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = np.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = np.argsort(residuals_mean)
y = np.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(np.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.4)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none')
# Plot earlier mean residual
ax.plot(np.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=5, color='gray', alpha=0.8)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner error bar. The gray dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = np.mean(residuals_3, 0)
y2 = np.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](#Dataset)2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers) - [Posterior Predictive Density](#Posterior-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](#References)
###Code
%reset -s -f
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.3.0')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between the number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouses` and plot the results.
###Code
sns.regplot('WaffleHouses', 'Divorce', dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
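###Markdown
A quick illustrative check that the transform behaved as intended:
###Code
# Standardized columns should have mean ~0 and standard deviation ~1.
print(dset.DivorceScaled.mean(), dset.DivorceScaled.std())
###Output
_____no_output_____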
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
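###Markdown
As a quick sanity check (a sketch, not part of the original analysis), we can seed the model, run it forward once using the effect handlers imported above, and inspect the shapes of the recorded sample sites:
###Code
# Trace one forward execution of the seeded model and print each site's shape.
exec_trace = handlers.trace(handlers.seed(model, random.PRNGKey(1))).get_trace(
    marriage=dset.MarriageScaled.values)
print({name: site['value'].shape for name, site in exec_trace.items()})
###Output
_____no_output_____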
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:06<00:00, 487.44it/s, 3 steps of size 7.35e-01. acc. prob=0.92]
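###Markdown
Convergence diagnostics such as $\hat{R}$ are most informative when computed across multiple chains. A minimal sketch, assuming the `num_chains` argument of the `MCMC` constructor (chains run sequentially unless multiple devices are available):
###Code
# Hypothetical cross-check: with two chains, the Gelman-Rubin diagnostic in
# print_summary() compares between-chain and within-chain variance.
rng_key, rng_key_ = random.split(rng_key)
mcmc_multi = MCMC(NUTS(model), num_warmup, num_samples, num_chains=2)
mcmc_multi.run(rng_key_, marriage=dset.MarriageScaled.values,
               divorce=dset.DivorceScaled.values)
mcmc_multi.print_summary()
###Output
_____no_output_____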
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (a `dict` keyed by the names of the latent sites, as returned by `mcmc.get_samples()`) containing samples from the posterior distribution for each of the latent parameters in the model.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument (a prior predictive sketch follows the next cell).
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
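###Markdown
The same utility can also sketch the *prior* predictive distribution: when no posterior samples are supplied, latent sites are drawn from their priors (a minimal sketch, assuming the `num_samples` argument mentioned above):
###Code
# Draw 100 prior predictive samples; latent parameters come from the priors.
rng_key, rng_key_ = random.split(rng_key)
prior_predictive = Predictive(model, num_samples=100)
prior_predictions = prior_predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
print(prior_predictions.shape)  # (100, 50)
###Output
_____no_output_____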
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.condition(handlers.seed(model, rng_key), post_samples)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by `MCMC` above) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. If we didn't use `vmap`, we would have to use a native for loop to generate predictions for each sample, which would be much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](References)] which is given by $$ log \prod_{i=1}^{n} \int p(y_i | \theta) p_{post}(\theta) d\theta \approx \sum_{i=1}^n log \frac{\sum_s p(\theta^{s})}{S} \\= \sum_{i=1}^n (log \sum_s p(\theta^{s}) - log(S))$$.Here, $i$ indexes the observed data points $y$ and $s$ indexes the posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
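###Markdown
The `logsumexp` trick above matters numerically: averaging raw likelihoods underflows for strongly negative log probabilities, while the log-space computation stays finite. A toy illustration with hypothetical values:
###Code
# Naive mean of exp(-700) underflows to 0 in float32, giving -inf;
# logsumexp computes the same log-mean stably (~ -700.69 here).
toy = jnp.array([-700., -701., -702.])
print(jnp.log(jnp.mean(jnp.exp(toy))))         # -inf due to underflow
print(logsumexp(toy) - jnp.log(toy.shape[0]))  # finite, stable result
###Output
_____no_output_____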
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing the `log likelihood` as in the first function above, for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack (a short sketch of the built-in utility follows the next cell).
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.70353698730469
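###Markdown
For reference, here is a minimal sketch of the built-in utility mentioned above (assuming the `log_likelihood(model, posterior_samples, *args, **kwargs)` signature); it returns per-observation log probabilities for every posterior draw. We alias the import so it does not shadow our own `log_likelihood` helper:
###Code
from numpyro.infer import log_likelihood as np_log_likelihood
# Shape (num_samples, 50): one log probability per posterior draw and state.
ll = np_log_likelihood(model, samples_1,
                       marriage=dset.MarriageScaled.values,
                       divorce=dset.DivorceScaled.values)['obs']
print(ll.shape)
###Output
_____no_output_____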
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.22672653198242
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to that of Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.12065505981445
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points lie well within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that we are missing in our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate the measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 15 of [[1](#References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:10<00:00, 379.65it/s, 15 steps of size 2.93e-01. acc. prob=0.92]
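###Markdown
Note that this model also infers a latent "true" divorce rate for every state; those draws sit alongside the regression coefficients in the returned samples (a quick shape check):
###Code
# One latent 'divorce_rate' value per posterior draw and per state.
print(samples_4['divorce_rate'].shape)  # (3000, 50)
###Output
_____no_output_____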
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those for Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner (orange) error bar. The black markers are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](#Dataset)2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers) - [Posterior Predictive Density](#Posterior-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](#References)
###Code
%reset -s -f
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.4.1')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between the number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouses` and plot the results.
###Code
sns.regplot('WaffleHouses', 'Divorce', dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:04<00:00, 669.73it/s, 3 steps of size 7.48e-01. acc. prob=0.91]
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (a `dict` keyed by the names of the latent sites, as returned by `mcmc.get_samples()`) containing samples from the posterior distribution for each of the latent parameters in the model.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot, that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.htmlnumpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.condition(handlers.seed(model, rng_key), post_samples)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the MCMC run above) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) to vectorize predictions. If we didn't use `vmap`, we would have to use a native Python for loop over each posterior sample, which would be much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
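To make the trace concrete, the following purely illustrative snippet (not part of the original tutorial) conditions the model on a single posterior draw and prints the recorded sites.
###Code
# Condition the model on one posterior draw, seed it, and record its trace.
single_draw = {k: v[0] for k, v in samples_1.items()}
traced_model = handlers.condition(handlers.seed(model, random.PRNGKey(1)), single_draw)
exec_trace = handlers.trace(traced_model).get_trace(marriage=dset.MarriageScaled.values)
for name, site in exec_trace.items():
    # Each site records its type and value; conditioned latents keep scalar
    # shape, while 'obs' has one entry per state.
    print(name, site['type'], jnp.shape(site['value']))
###Output
_____no_output_____
###Markdown
This shows exactly what `predict` relies on: the conditioned latent sites take the supplied values, and the `obs` site is freshly sampled given those values.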
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](References)] which is given by $$ \log \prod_{i=1}^{n} \int p(y_i | \theta) p_{post}(\theta) d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i | \theta^{s})}{S} \\ = \sum_{i=1}^n \left(\log \sum_s p(y_i | \theta^{s}) - \log S\right)$$.Here, $i$ indexes the observed data points $y$, $s$ indexes the posterior samples over the latent parameters $\theta$, and $S$ is the total number of posterior samples. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
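###Markdown
Before using these helpers, a brief numerical aside (with made-up values) on why the implementation uses `logsumexp` rather than averaging raw likelihoods: exponentiating large negative log-likelihoods underflows in floating point.
###Code
# Toy illustration of log-sum-exp stability (values are arbitrary).
toy = jnp.array([-200.0, -201.0, -202.0])
print(logsumexp(toy) - jnp.log(3.0))    # stable: about -200.69
print(jnp.log(jnp.mean(jnp.exp(toy))))  # naive form underflows to -inf
###Output
_____no_output_____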
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.htmllog-likelihood) utility function that can be used directly for computing the log likelihood (as in the first function above) for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.65252685546875
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.238067626953125
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both the marriage rate and the median age of marriage. Note that the model's posterior predictive density is similar to that of Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06075668334961
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
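###Markdown
Before reading the plot, a small hypothetical helper (not in the original tutorial) to list the states with the largest mean absolute residuals under Model 3; it only reuses `residuals_mean` from the cell above.
###Code
# Print the five states Model 3 fits worst, by mean absolute residual.
worst = jnp.argsort(-jnp.abs(residuals_mean))[:5]
for i in worst:
    print(dset.Loc.values[int(i)], float(residuals_mean[int(i)]))
###Output
_____no_output_____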
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors missing from our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:06<00:00, 650.76it/s, 15 steps of size 3.00e-01. acc. prob=0.91]
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those of Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the orange error bar. The black dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](Dataset)2. [Regression Model to Predict Divorce Rate](Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](References)
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use("bmh")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
assert numpyro.__version__.startswith("0.8.0")
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = "https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv"
dset = pd.read_csv(DATASET_URL, sep=";")
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = [
"Population",
"MedianAgeMarriage",
"Marriage",
"WaffleHouses",
"South",
"Divorce",
]
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette="husl");
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
###Code
sns.regplot(x="WaffleHouses", y="Divorce", data=dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](References)] which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset["AgeScaled"] = dset.MedianAgeMarriage.pipe(standardize)
dset["MarriageScaled"] = dset.Marriage.pipe(standardize)
dset["DivorceScaled"] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
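###Markdown
A quick, illustrative sanity check that the transformation behaved as intended:
###Code
# Standardized columns should have mean ~0 and standard deviation ~1.
print(dset[["AgeScaled", "MarriageScaled", "DivorceScaled"]].agg(["mean", "std"]).round(6))
###Output
_____no_output_____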
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](References)], [[4](References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample("a", dist.Normal(0.0, 0.2))
M, A = 0.0, 0.0
if marriage is not None:
bM = numpyro.sample("bM", dist.Normal(0.0, 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample("bA", dist.Normal(0.0, 0.5))
A = bA * age
sigma = numpyro.sample("sigma", dist.Exponential(1.0))
mu = a + M + A
numpyro.sample("obs", dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jaxrandom-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
# Run NUTS.
kernel = NUTS(model)
num_samples = 2000
mcmc = MCMC(kernel, num_warmup=1000, num_samples=num_samples)
mcmc.run(
rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values
)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:04<00:00, 748.14it/s, 7 steps of size 7.41e-01. acc. prob=0.92]
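###Markdown
The printed summary can largely be reproduced by hand, since the returned samples are plain arrays; the snippet below is an illustrative aside (`hpdi` reports a 90% highest-density interval, which need not coincide exactly with the interval shown in the summary).
###Code
# Manual posterior summaries from the raw sample arrays.
for name, value in samples_1.items():
    print(name, jnp.mean(value), jnp.std(value), hpdi(value, 0.9))
###Output
_____no_output_____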
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.gelman_rubin) ($\hat{R}$). The values for these diagnostics indicate that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a `dict`, keyed by the names of the latent parameters, containing samples from the posterior distribution for each of them.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, "o")
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = (
jnp.expand_dims(samples_1["a"], -1)
+ jnp.expand_dims(samples_1["bM"], -1) * dset.MarriageScaled.values
)
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(
xlabel="Marriage rate", ylabel="Divorce rate", title="Regression line with 90% CI"
);
###Output
_____no_output_____
###Markdown
We can see from the plot, that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Prior Predictive DistributionLet us check that we have set sensible priors by sampling from the prior predictive distribution. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.htmlnumpyro.infer.util.Predictive) utility for this purpose.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
prior_predictive = Predictive(model, num_samples=100)
prior_predictions = prior_predictive(rng_key_, marriage=dset.MarriageScaled.values)[
"obs"
]
mean_prior_pred = jnp.mean(prior_predictions, axis=0)
hpdi_prior_pred = hpdi(prior_predictions, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_prior_pred, hpdi_prior_pred)
ax.set(xlabel="Marriage rate", ylabel="Divorce rate", title="Predictions with 90% CI");
###Output
_____no_output_____
###Markdown
Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)["obs"]
df = dset.filter(["Location"])
df["Mean Predictions"] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace["obs"]["value"]
# vectorize predictions via vmap
predict_fn = vmap(
lambda rng_key, samples: predict(
rng_key, samples, model, marriage=dset.MarriageScaled.values
)
)
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the MCMC run above) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) to vectorize predictions. If we didn't use `vmap`, we would have to use a native Python for loop over each posterior sample, which would be much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(["Location"])
df["Mean Predictions"] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel="Marriage rate", ylabel="Divorce rate", title="Predictions with 90% CI");
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](References)] which is given by $$ \log \prod_{i=1}^{n} \int p(y_i | \theta) p_{post}(\theta) d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i | \theta^{s})}{S} \\ = \sum_{i=1}^n \left(\log \sum_s p(y_i | \theta^{s}) - \log S\right)$$.Here, $i$ indexes the observed data points $y$, $s$ indexes the posterior samples over the latent parameters $\theta$, and $S$ is the total number of posterior samples. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace["obs"]
return obs_node["fn"].log_prob(obs_node["value"])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(
lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs)
)
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.htmllog-likelihood) utility function that can be used directly for computing the log likelihood (as in the first function above) for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -66.70008087158203
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = (
jnp.expand_dims(samples_2["a"], -1)
+ jnp.expand_dims(samples_2["bA"], -1) * dset.AgeScaled.values
)
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(
xlabel="Median marriage age",
ylabel="Divorce rate",
title="Regression line with 90% CI",
);
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_, age=dset.AgeScaled.values)["obs"]
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel="Median Age", ylabel="Divorce rate", title="Predictions with 90% CI");
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -59.251956939697266
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both the marriage rate and the median age of marriage. Note that the model's posterior predictive density is similar to that of Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -59.06374740600586
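###Markdown
As a hypothetical recap (not part of the original tutorial), we can recompute the three models' log posterior predictive densities side by side; higher values indicate a better fit to the observed data.
###Code
# Compare the models on log posterior predictive density.
rng_key, rng_key_ = random.split(rng_key)
comparisons = [
    ("Model 1", samples_1, dict(marriage=dset.MarriageScaled.values)),
    ("Model 2", samples_2, dict(age=dset.AgeScaled.values)),
    ("Model 3", samples_3, dict(marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values)),
]
for name, samples, kwargs in comparisons:
    lppd = log_pred_density(rng_key_, samples, model, divorce=dset.DivorceScaled.values, **kwargs)
    print("{}: {:.2f}".format(name, lppd))
###Output
_____no_output_____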
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(
rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values
)["obs"]
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, "--")
ax[0].errorbar(
pred_mean[idx],
y,
xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker="o",
ms=5,
mew=4,
ls="none",
alpha=0.8,
)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker="o", ls="none", color="gray")
ax[0].set(
xlabel="Posterior Predictive (red) vs. Actuals (gray)",
ylabel="State",
title="Posterior Predictive with 90% CI",
)
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10)
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, "--")
ax[1].errorbar(
residuals_mean[idx], y, xerr=err[idx], marker="o", ms=5, mew=4, ls="none", alpha=0.8
)
ax[1].set(xlabel="Residuals", ylabel="State", title="Residuals with 90% CI")
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors missing from our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample("a", dist.Normal(0.0, 0.2))
bM = numpyro.sample("bM", dist.Normal(0.0, 0.5))
M = bM * marriage
bA = numpyro.sample("bA", dist.Normal(0.0, 0.5))
A = bA * age
sigma = numpyro.sample("sigma", dist.Exponential(1.0))
mu = a + M + A
divorce_rate = numpyro.sample("divorce_rate", dist.Normal(mu, sigma))
numpyro.sample("obs", dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset["DivorceScaledSD"] = dset["Divorce SE"] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
divorce=dset.DivorceScaled.values,
)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:06<00:00, 578.19it/s, 15 steps of size 2.58e-01. acc. prob=0.93]
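###Markdown
One illustrative detail worth inspecting: `model_se` introduces `divorce_rate` as an additional latent sample site, so the posterior now also contains a de-noised divorce-rate estimate for every state.
###Code
# The latent divorce_rate site has one dimension per state.
print(samples_4["divorce_rate"].shape)
# Posterior-mean de-noised estimates for the first five states.
print(jnp.mean(samples_4["divorce_rate"], axis=0)[:5])
###Output
_____no_output_____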
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those of Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
)["obs"]
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, "--")
ax.errorbar(
residuals_mean[idx], y, xerr=err[idx], marker="o", ms=5, mew=4, ls="none", alpha=0.8
)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx], ls="none", color="orange", alpha=0.9)
# Plot earlier mean residual
ax.plot(
jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx],
y,
ls="none",
marker="o",
ms=6,
color="black",
alpha=0.6,
)
ax.set(xlabel="Residuals", ylabel="State", title="Residuals with 90% CI")
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10)
ax.text(
-2.8,
-7,
"Residuals (with error-bars) from current model (in red). "
"Black marker \nshows residuals from the previous model (Model 3). "
"Measurement \nerror is indicated by orange bar.",
);
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the orange error bar. The black dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls="none", marker="o")
ax.plot(x, y2, ls="none", marker="o")
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], "--", color="gray")
ax.set(
xlabel="Measurement Noise",
ylabel="Residual",
title="Mean residuals (Model 4: red, Model 3: blue)",
);
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](Dataset)2. [Regression Model to Predict Divorce Rate](Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](References)
###Code
%reset -s -f
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.4.1')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
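###Markdown
As an illustrative numeric companion to the pairplot, we can also print the plain correlation matrix for the same variables:
###Code
# Pairwise correlations between the plotted variables.
print(dset[vars].corr().round(2))
###Output
_____no_output_____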
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
###Code
sns.regplot(x='WaffleHouses', y='Divorce', data=dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](References)] which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](References)], [[4](References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jaxrandom-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup=num_warmup, num_samples=num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:04<00:00, 669.73it/s, 3 steps of size 7.48e-01. acc. prob=0.91]
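###Markdown
As an aside, the section above mentioned the vanilla `HMC` kernel. Here is a minimal sketch of swapping it in for NUTS, reusing the model, data and key names defined above (results are illustrative and will differ slightly from the NUTS run):
###Code
# A sketch: the same inference with the vanilla HMC kernel instead of NUTS.
from numpyro.infer import HMC
hmc_kernel = HMC(model)
hmc_mcmc = MCMC(hmc_kernel, num_warmup=num_warmup, num_samples=num_samples)
hmc_mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
hmc_mcmc.print_summary()
###Output
_____no_output_____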
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase. During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution. At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` keyed by the names of the latent parameters) containing samples from the posterior distribution for each of the latent parameters in the model. To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument; a short prior predictive sketch follows the next cell.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
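###Markdown
Relatedly, if we do not pass posterior samples, `Predictive` draws the latent parameters from the *prior* instead, and the `num_samples` argument mentioned above controls how many draws are generated. A minimal sketch (the fixed key and 100 draws are illustrative choices):
###Code
# With no posterior samples supplied, Predictive samples the latents from the prior.
prior_predictive = Predictive(model, num_samples=100)
prior_pred = prior_predictive(random.PRNGKey(11), marriage=dset.MarriageScaled.values)['obs']
# One row per prior draw, one column per state: shape (100, 50).
print(prior_pred.shape)
###Output
_____no_output_____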
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions. A small sketch that inspects a single execution trace follows the code below.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.condition(handlers.seed(model, rng_key), post_samples)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
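###Markdown
Before unpacking `predict`, here is a small sketch that seeds the model, records a single execution trace, and prints what the trace stores for each sample site (the key value is an arbitrary illustrative choice):
###Code
# Seed the model so its sample statements can draw, then record one execution trace.
exec_trace = handlers.trace(handlers.seed(model, random.PRNGKey(1))).get_trace(
    marriage=dset.MarriageScaled.values)
for name, site in exec_trace.items():
    # Each site records metadata such as its type and the sampled value.
    print(name, site['type'], jnp.shape(site['value']))
###Output
_____no_output_____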
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the MCMC run) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native Python for loop over each posterior sample, which is much slower. Each draw from the posterior can be used to get predictions over all 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](#References)], which is given by $$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i \mid \theta^{s})}{S} \\= \sum_{i=1}^n \left( \log \sum_s p(y_i \mid \theta^{s}) - \log S \right). $$ Here, $i$ indexes the observed data points $y$ and $s$ indexes the $S$ posterior samples $\theta^{s}$ over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing the log likelihood, as in the first function above, for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack. A short sketch using the built-in utility follows the next cell.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.65252685546875
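###Markdown
For comparison, here is a minimal sketch of the built-in utility mentioned above. It returns the pointwise log likelihood for each observed site, from which the same quantity can be assembled (the alias avoids clashing with our hand-rolled `log_likelihood`):
###Code
from numpyro.infer import log_likelihood as builtin_log_likelihood
# Pointwise log likelihood with shape (num_samples, num_observations).
log_lk = builtin_log_likelihood(model, samples_1,
                                marriage=dset.MarriageScaled.values,
                                divorce=dset.DivorceScaled.values)['obs']
# Assemble the log posterior predictive density exactly as in `log_pred_density`.
print((logsumexp(log_lk, 0) - jnp.log(log_lk.shape[0])).sum())
###Output
_____no_output_____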
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.238067626953125
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to Model 2 which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06075668334961
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true. Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors, missing from our model, that affect the divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 15 of [[1](#References)]. To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:06<00:00, 650.76it/s, 15 steps of size 3.00e-01. acc. prob=0.91]
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those of Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner error bar. The black markers are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model. To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC, e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](#Dataset)2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](#Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](#References)
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.7.2')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouses` and plot the results.
###Code
sns.regplot(x='WaffleHouses', y='Divorce', data=dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html); a quick sanity check of the transformation follows the next cell.
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
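###Markdown
A quick sanity check (sketch): after the transformation, each standardized column should have mean approximately 0 and standard deviation approximately 1.
###Code
# Verify the standardization: mean ~0 and std ~1 for each scaled column.
print(dset[['AgeScaled', 'MarriageScaled', 'DivorceScaled']].describe().loc[['mean', 'std']])
###Output
_____no_output_____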
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](#Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model. The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: prints diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution. Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class. A sketch after the sampling cell below records the per-sample potential energy via `extra_fields`.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
# Run NUTS.
kernel = NUTS(model)
num_samples = 2000
mcmc = MCMC(kernel, num_warmup=1000, num_samples=num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:04<00:00, 748.14it/s, 7 steps of size 7.41e-01. acc. prob=0.92]
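###Markdown
Since NUTS works with the potential energy (negative log joint density) described above, we can ask `MCMC` to record it for every sample via the `extra_fields` argument. A minimal sketch, re-running the same chain with the same key so it reproduces the samples above:
###Code
# Re-run the same chain, additionally recording the potential energy per sample.
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values,
         extra_fields=('potential_energy',))
pe = mcmc.get_extra_fields()['potential_energy']
print('Mean potential energy:', jnp.mean(pe))
###Output
_____no_output_____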
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase. During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution. At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` keyed by the names of the latent parameters) containing samples from the posterior distribution for each of the latent parameters in the model. To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Prior Predictive DistributionLet us check that we have set sensible priors by sampling from the prior predictive distribution. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
prior_predictive = Predictive(model, num_samples=100)
prior_predictions = prior_predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
mean_prior_pred = jnp.mean(prior_predictions, axis=0)
hpdi_prior_pred = hpdi(prior_predictions, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_prior_pred, hpdi_prior_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
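###Markdown
Because the predictors are plain keyword arguments, the same posterior can also generate predictions for inputs that were never observed, e.g. an evenly spaced grid of standardized marriage rates. A minimal sketch (the grid and key are illustrative choices):
###Code
# Predict divorce rates over a hypothetical grid of standardized marriage rates.
marriage_grid = jnp.linspace(-2., 2., 20)
grid_pred = Predictive(model, samples_1)(random.PRNGKey(2), marriage=marriage_grid)['obs']
# One row per posterior sample, one column per grid point.
print(grid_pred.shape)
###Output
_____no_output_____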
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the MCMC run) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native Python for loop over each posterior sample, which is much slower. Each draw from the posterior can be used to get predictions over all 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](#References)], which is given by $$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i \mid \theta^{s})}{S} \\= \sum_{i=1}^n \left( \log \sum_s p(y_i \mid \theta^{s}) - \log S \right). $$ Here, $i$ indexes the observed data points $y$ and $s$ indexes the $S$ posterior samples $\theta^{s}$ over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing the log likelihood, as in the first function above, for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack. A short sketch using the built-in utility follows the next cell.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.70008087158203
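###Markdown
As a cross-check, here is a minimal sketch of the built-in utility mentioned above; it returns the pointwise log likelihood per observed site, and assembling it as in `log_pred_density` should reproduce the number printed above (the alias avoids clashing with our hand-rolled `log_likelihood`):
###Code
from numpyro.infer import log_likelihood as builtin_log_likelihood
# Pointwise log likelihood with shape (num_samples, num_observations).
log_lk = builtin_log_likelihood(model, samples_1,
                                marriage=dset.MarriageScaled.values,
                                divorce=dset.DivorceScaled.values)['obs']
print((logsumexp(log_lk, 0) - jnp.log(log_lk.shape[0])).sum())
###Output
_____no_output_____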
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.251956939697266
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to Model 2 which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06374740600586
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true. Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors, missing from our model, that affect the divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](#References)]. To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:06<00:00, 578.19it/s, 15 steps of size 2.58e-01. acc. prob=0.93]
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those of Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner error bar. The black markers are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model. To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC, e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](#Dataset)2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](#Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](#Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](#Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](#References)
###Code
%reset -s -f
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.5.0')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouses` and plot the results.
###Code
sns.regplot(x='WaffleHouses', y='Divorce', data=dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](References)], [[4](References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
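###Markdown
As a quick sanity check (a minimal sketch of our own, not part of the original tutorial), we can preview the effect handlers introduced later: seeding the model with a `PRNGKey` and tracing one execution shows which sample sites the model creates and their shapes.
###Code
# Draw once from the prior by seeding the model with a PRNGKey and tracing it.
exec_trace = handlers.trace(handlers.seed(model, random.PRNGKey(11))).get_trace(
    marriage=dset.MarriageScaled.values)
for name, site in exec_trace.items():
    if site['type'] == 'sample':
        # With divorce=None, 'obs' is sampled rather than observed.
        print(name, site['value'].shape)
###Output
_____no_output_____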
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the Verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and then does sampling, starting from the last sample of the warmup phase. - `print_summary()`: print diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:06<00:00, 429.13it/s, 3 steps of size 7.48e-01. acc. prob=0.91]
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The values for these diagnostics indicate that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (a `dict` mapping each latent parameter's name to an array of its posterior samples) containing samples from the posterior distribution for each of the latent parameters in the model.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credible Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute the CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
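###Markdown
`Predictive` can also be called without posterior samples to draw from the *prior* predictive distribution instead; the sketch below is our own addition and assumes the default arguments.
###Code
# With no posterior samples, Predictive draws the latents from their priors.
prior_predictive = Predictive(model, num_samples=100)
prior_pred = prior_predictive(random.PRNGKey(1), marriage=dset.MarriageScaled.values)['obs']
print(prior_pred.shape)  # we expect (100, 50): 100 prior draws over the 50 states
###Output
_____no_output_____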
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density, which we sketch below. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by MCMC) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native for loop over each posterior sample, which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
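###Markdown
Since each trace site records its distribution (`'fn'`) and drawn `'value'`, the trace also gives us the model's log joint density almost for free. The helper below is our own sketch (not from the original tutorial): it conditions the model on a single posterior draw and sums `log_prob` over every sample site.
###Code
def log_joint(rng_key, single_draw, model, *args, **kwargs):
    # Substitute latent sites with one posterior draw, then record a trace.
    conditioned = handlers.condition(handlers.seed(model, rng_key), single_draw)
    tr = handlers.trace(conditioned).get_trace(*args, **kwargs)
    # Log joint = sum of log p(value) over all sample sites (latent and observed).
    return sum(site['fn'].log_prob(site['value']).sum()
               for site in tr.values() if site['type'] == 'sample')

single_draw = {k: v[0] for k, v in samples_1.items()}
print(log_joint(random.PRNGKey(2), single_draw, model,
                marriage=dset.MarriageScaled.values,
                divorce=dset.DivorceScaled.values))
###Output
_____no_output_____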
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](References)], which is given by $$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta \approx \sum_{i=1}^{n} \log \frac{\sum_{s} p(y_i \mid \theta^{s})}{S} = \sum_{i=1}^{n} \Big( \log \sum_{s} p(y_i \mid \theta^{s}) - \log S \Big). $$ Here, $i$ indexes the $n$ observed data points $y$ and $s$ indexes the $S$ posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly for computing the log likelihood, as in the first function above, for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.65252685546875
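###Markdown
As a cross-check (our own addition), the built-in utility should agree with our hand-rolled version. We import it under an alias so that it does not shadow the `log_likelihood` function we defined above.
###Code
from numpyro.infer import log_likelihood as numpyro_log_likelihood

# Per-draw, per-observation log likelihoods for Model 1, shape (num_samples, 50).
log_lk = numpyro_log_likelihood(model, samples_1,
                                marriage=dset.MarriageScaled.values,
                                divorce=dset.DivorceScaled.values)['obs']
# Recompute the log posterior predictive density; it should match the value above.
print((logsumexp(log_lk, 0) - jnp.log(num_samples)).sum())
###Output
_____no_output_____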
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log posterior predictive density as compared to Model 1, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.238067626953125
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both the marriage rate and the median age of marriage. Note that the model's posterior predictive density is similar to that of Model 2, which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06075668334961
###Markdown
Divorce Rate Residuals by StateThe regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals (residuals_3, residuals_mean and residuals_hpdi were computed above)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that we are missing in our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:09<00:00, 408.70it/s, 15 steps of size 3.00e-01. acc. prob=0.91]
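###Markdown
As an aside (our own sketch, not part of the original tutorial), the latent `divorce_rate` can also be written in a non-centered form, which often improves HMC geometry for models with per-observation latents; we have not verified that it changes the results here.
###Code
def model_se_noncentered(marriage, age, divorce_sd, divorce=None):
    a = numpyro.sample('a', dist.Normal(0., 0.2))
    bM = numpyro.sample('bM', dist.Normal(0., 0.5))
    bA = numpyro.sample('bA', dist.Normal(0., 0.5))
    sigma = numpyro.sample('sigma', dist.Exponential(1.))
    mu = a + bM * marriage + bA * age
    # Sample standardized residuals z and shift/scale them deterministically,
    # instead of sampling divorce_rate ~ Normal(mu, sigma) directly.
    z = numpyro.sample('z', dist.Normal(jnp.zeros(marriage.shape[0]), 1.))
    numpyro.sample('obs', dist.Normal(mu + sigma * z, divorce_sd), obs=divorce)
###Output
_____no_output_____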
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to those from Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner (orange) error bar. The black dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](Dataset)2. [Regression Model to Predict Divorce Rate](Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](References)
###Code
%reset -s -f
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.2.4')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
###Code
sns.regplot('WaffleHouses', 'Divorce', dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](References)] which explores the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regressionn model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](References)], [[4](References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which given the parameter values returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts steps size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: print diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jaxrandom-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:06<00:00, 487.44it/s, 3 steps of size 7.35e-01. acc. prob=0.92]
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability which is 0.8, by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` since `init_samples` was a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot, that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.htmlnumpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.condition(handlers.seed(model, rng_key), post_samples)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native for loop which for each sample which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](References)] which is given by $$ log \prod_{i=1}^{n} \int p(y_i | \theta) p_{post}(\theta) d\theta \approx \sum_{i=1}^n log \frac{\sum_s p(\theta^{s})}{S} \\= \sum_{i=1}^n (log \sum_s p(\theta^{s}) - log(S))$$.Here, $i$ indexes the observed data points $y$ and $s$ indexes the posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.htmllog-likelihood) utility function that can be used directly for computing `log likelihood` as in the first function for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll out your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.70353698730469
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log likelihood as compared to Model 2, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.22672653198242
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to Model 2 which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.12065505981445
###Markdown
Divorce Rate Residuals by StateThe regression plots above shows that the observed divorce rates for many states differs considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points like within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that we are missing out in our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 15 of [[1](References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:10<00:00, 379.65it/s, 15 steps of size 2.93e-01. acc. prob=0.92]
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients is very similar to Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by inner error bar. The gray dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](Dataset)2. [Regression Model to Predict Divorce Rate](Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](References)
###Code
%reset -s -f
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.4.1')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
###Code
sns.regplot('WaffleHouses', 'Divorce', dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](References)] which explores the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regressionn model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](References)], [[4](References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which given the parameter values returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts steps size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: print diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jaxrandom-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 1000, 2000
# Run NUTS.
kernel = NUTS(model)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:06<00:00, 429.13it/s, 3 steps of size 7.48e-01. acc. prob=0.91]
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability which is 0.8, by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` since `init_samples` was a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot, that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.htmlnumpyro.infer.util.Predictive) utility for this purpose. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native for loop which for each sample which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](References)] which is given by $$ log \prod_{i=1}^{n} \int p(y_i | \theta) p_{post}(\theta) d\theta \approx \sum_{i=1}^n log \frac{\sum_s p(\theta^{s})}{S} \\= \sum_{i=1}^n (log \sum_s p(\theta^{s}) - log(S))$$.Here, $i$ indexes the observed data points $y$ and $s$ indexes the posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.htmllog-likelihood) utility function that can be used directly for computing `log likelihood` as in the first function for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll out your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.65252685546875
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log likelihood as compared to Model 2, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.238067626953125
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to Model 2 which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06075668334961
###Markdown
Divorce Rate Residuals by StateThe regression plots above shows that the observed divorce rates for many states differs considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points like within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that we are missing out in our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:09<00:00, 408.70it/s, 15 steps of size 3.00e-01. acc. prob=0.91]
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients is very similar to Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner (orange) error bars. The black dots are the mean residuals from our earlier Model 3. Notice how the additional degree of freedom used to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model. To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____
###Markdown
Bayesian Regression Using NumPyro. In this tutorial, we will explore how to do Bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC, e.g. computing model log likelihood, generating an empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](Dataset)2. [Regression Model to Predict Divorce Rate](Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](References)
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use("bmh")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
assert numpyro.__version__.startswith("0.9.1")
###Output
_____no_output_____
###Markdown
Dataset. For this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, the number of Waffle Houses.
###Code
DATASET_URL = "https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv"
dset = pd.read_csv(DATASET_URL, sep=";")
dset
###Output
_____no_output_____
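###Markdown
Only a handful of these columns matter for what follows. As a quick sketch, here is a numerical summary of the ones referenced later in the tutorial (`Marriage`, `MedianAgeMarriage`, `Divorce`, and `Divorce SE`):
###Code
# Summary statistics for the columns used in this tutorial (sketch).
dset[["Marriage", "MedianAgeMarriage", "Divorce", "Divorce SE"]].describe()
###Output
_____no_output_____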
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = [
"Population",
"MedianAgeMarriage",
"Marriage",
"WaffleHouses",
"South",
"Divorce",
]
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette="husl");
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between the number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouses` and plot the results.
###Code
sns.regplot(x="WaffleHouses", y="Divorce", data=dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](References)], which explore the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce Rate. Let us now write a regression model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset["AgeScaled"] = dset.MedianAgeMarriage.pipe(standardize)
dset["MarriageScaled"] = dset.Marriage.pipe(standardize)
dset["DivorceScaled"] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
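###Markdown
As a quick sanity check (a minimal sketch), the standardized columns should now have mean approximately `0` and standard deviation approximately `1`:
###Code
# Verify the standardization (sketch).
dset[["AgeScaled", "MarriageScaled", "DivorceScaled"]].agg(["mean", "std"])
###Output
_____no_output_____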
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](References)], [[4](References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample("a", dist.Normal(0.0, 0.2))
M, A = 0.0, 0.0
if marriage is not None:
bM = numpyro.sample("bM", dist.Normal(0.0, 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample("bA", dist.Normal(0.0, 0.5))
A = bA * age
sigma = numpyro.sample("sigma", dist.Exponential(1.0))
mu = a + M + A
numpyro.sample("obs", dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
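###Markdown
Because the model is an ordinary Python callable, we can already run it forward and inspect the sample sites it declares for a given set of predictors. This is a small sketch using the `seed` and `trace` effect handlers, which are covered in more detail later in this tutorial:
###Code
# Run the model forward once under a fixed PRNG seed and list its sample sites (sketch).
with handlers.seed(rng_seed=0):
    site_trace = handlers.trace(model).get_trace(marriage=dset.MarriageScaled.values)
print({name: jnp.shape(site["value"]) for name, site in site_trace.items()})
###Output
_____no_output_____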
###Markdown
Model 1: Predictor - Marriage Rate. We first try to model the divorce rate as depending on a single variable, the marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for the `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](References)] for more details on the NUTS algorithm) to run inference on this simple model. The Hamiltonian Monte Carlo (or NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which, given the parameter values, returns the potential energy (or negative log joint density). Additionally, the Verlet integrator in HMC (or NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts step size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: print diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution. Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
# Run NUTS.
kernel = NUTS(model)
num_samples = 2000
mcmc = MCMC(kernel, num_warmup=1000, num_samples=num_samples)
mcmc.run(
rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values
)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:04<00:00, 748.14it/s, 7 steps of size 7.41e-01. acc. prob=0.92]
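###Markdown
The collected samples are plain arrays, so posterior summaries are one-liners. For instance, a sketch of the posterior mean and 90% hpdi of the slope `bM`, computed directly from the samples:
###Code
# Posterior mean and 90% highest posterior density interval of bM (sketch).
print("bM mean:", jnp.mean(samples_1["bM"]))
print("bM 90% hpdi:", hpdi(samples_1["bM"], 0.9))
###Output
_____no_output_____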
###Markdown
Posterior Distribution over the Regression Parameters. We notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability, which is 0.8 by default. We were able to successfully adapt our step size to achieve this target in the warmup phase. During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](References)] for more details). If there are any issues in the model specification, the first signals to notice would be low acceptance probabilities or a very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution. At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [Gelman-Rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\hat{R}$). The values of these diagnostics indicate that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a `dict` containing samples from the posterior distribution for each of the latent parameters in the model. To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% credible interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute the CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, "o")
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = (
jnp.expand_dims(samples_1["a"], -1)
+ jnp.expand_dims(samples_1["bM"], -1) * dset.MarriageScaled.values
)
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(
xlabel="Marriage rate", ylabel="Divorce rate", title="Regression line with 90% CI"
);
###Output
_____no_output_____
###Markdown
We can see from the plot that the CI broadens towards the tails, where the data is relatively sparse, as can be expected. Prior Predictive Distribution. Let us check that we have set sensible priors by sampling from the prior predictive distribution. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.html#numpyro.infer.util.Predictive) utility for this purpose.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
prior_predictive = Predictive(model, num_samples=100)
prior_predictions = prior_predictive(rng_key_, marriage=dset.MarriageScaled.values)[
"obs"
]
mean_prior_pred = jnp.mean(prior_predictions, axis=0)
hpdi_prior_pred = hpdi(prior_predictions, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_prior_pred, hpdi_prior_pred)
ax.set(xlabel="Marriage rate", ylabel="Divorce rate", title="Predictions with 90% CI");
###Output
_____no_output_____
###Markdown
Posterior Predictive Distribution. Let us now look at the posterior predictive distribution to see how it compares with the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)["obs"]
df = dset.filter(["Location"])
df["Mean Predictions"] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect Handlers. To remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace["obs"]["value"]
# vectorize predictions via vmap
predict_fn = vmap(
lambda rng_key, samples: predict(
rng_key, samples, model, marriage=dset.MarriageScaled.values
)
)
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution, but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the MCMC run) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. Without `vmap`, we would have to loop over each sample with a native Python for loop, which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(["Location"])
df["Mean Predictions"] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel="Marriage rate", ylabel="Divorce rate", title="Predictions with 90% CI");
###Output
_____no_output_____
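###Markdown
As a quick shape check (sketch), the vectorized predictions contain one row per posterior draw and one column per state:
###Code
# Expected shape: (num_samples, 50) -- one prediction per posterior draw per state.
print(predictions_1.shape)
###Output
_____no_output_____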
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive Density. Likewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](References)], which is given by $$ \log \prod_{i=1}^{n} \int p(y_i \mid \theta) \, p_{post}(\theta) \, d\theta \approx \sum_{i=1}^n \log \frac{\sum_s p(y_i \mid \theta^{s})}{S} = \sum_{i=1}^n \Big( \log \sum_s p(y_i \mid \theta^{s}) - \log S \Big), $$ where $S$ is the number of posterior samples. Here, $i$ indexes the observed data points $y$ and $s$ indexes the posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace["obs"]
return obs_node["fn"].log_prob(obs_node["value"])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(
lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs)
)
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.html#log-likelihood) utility function that can be used directly to compute the log likelihood, as in the first function above, for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions: you can roll your own inference utilities using NumPyro's effect-handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -66.70008087158203
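###Markdown
For comparison, here is a sketch of the same computation using NumPyro's built-in `log_likelihood` utility mentioned above (imported under an alias to avoid shadowing our own function); it should agree with the hand-rolled version up to numerical precision:
###Code
from numpyro.infer import log_likelihood as numpyro_log_likelihood

# Cross-check our hand-rolled estimate with NumPyro's built-in utility (sketch).
log_lk = numpyro_log_likelihood(
    model, samples_1, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values
)["obs"]
n = log_lk.shape[0]
print("Log posterior predictive density:", (logsumexp(log_lk, 0) - jnp.log(n)).sum())
###Output
_____no_output_____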
###Markdown
Model 2: Predictor - Median Age of Marriage. We will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log posterior predictive density as compared to Model 1, indicating that the median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = (
jnp.expand_dims(samples_2["a"], -1)
+ jnp.expand_dims(samples_2["bA"], -1) * dset.AgeScaled.values
)
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(
xlabel="Median marriage age",
ylabel="Divorce rate",
title="Regression line with 90% CI",
);
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_, age=dset.AgeScaled.values)["obs"]
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel="Median Age", ylabel="Divorce rate", title="Predictions with 90% CI");
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -59.251956939697266
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of Marriage. Finally, we will also model the divorce rate as depending on both the marriage rate and the median age of marriage. Note that the model's posterior predictive density is similar to Model 2's, which likely indicates that the marginal information from the marriage rate in predicting the divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print(
"Log posterior predictive density: {}".format(
log_pred_density(
rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values,
)
)
)
###Output
Log posterior predictive density: -59.06374740600586
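###Markdown
To make the comparison across models explicit, here is a small sketch that recomputes the log posterior predictive density for all three models side by side (higher is better):
###Code
# Compare the three models on log posterior predictive density (sketch).
rng_key, rng_key_ = random.split(rng_key)
model_runs = {
    "Model 1 (marriage)": (samples_1, dict(marriage=dset.MarriageScaled.values)),
    "Model 2 (age)": (samples_2, dict(age=dset.AgeScaled.values)),
    "Model 3 (marriage + age)": (
        samples_3,
        dict(marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values),
    ),
}
for name, (post, predictors) in model_runs.items():
    lpd = log_pred_density(
        rng_key_, post, model, divorce=dset.DivorceScaled.values, **predictors
    )
    print("{}: {:.2f}".format(name, float(lpd)))
###Output
_____no_output_____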
###Markdown
Divorce Rate Residuals by State. The regression plots above show that the observed divorce rates for many states differ considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive distribution and the residuals (`Observed divorce rate - Predicted divorce rate`) for each state.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(
rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values
)["obs"]
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, "--")
ax[0].errorbar(
pred_mean[idx],
y,
xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker="o",
ms=5,
mew=4,
ls="none",
alpha=0.8,
)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker="o", ls="none", color="gray")
ax[0].set(
xlabel="Posterior Predictive (red) vs. Actuals (gray)",
ylabel="State",
title="Posterior Predictive with 90% CI",
)
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10)
# Plot residuals
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, "--")
ax[1].errorbar(
residuals_mean[idx], y, xerr=err[idx], marker="o", ms=5, mew=4, ls="none", alpha=0.8
)
ax[1].set(xlabel="Residuals", ylabel="State", title="Residuals with 90% CI")
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
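###Markdown
The extremes of this residual plot are easy to pull out programmatically; a small sketch using the sort order computed above:
###Code
# States with the most extreme mean residuals under Model 3 (sketch).
print("Most over-predicted: ", dset.Loc.values[int(idx[0])])
print("Most under-predicted:", dset.Loc.values[int(idx[-1])])
###Output
_____no_output_____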
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both plots are sorted by the residuals, i.e. at the bottom we are looking at states where the model predictions are higher than the observed rates, whereas at the top the reverse is true. Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors, missing from our model, that affect the divorce rate across states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement Error. Note that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate the measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will help ensure that observations with higher confidence (i.e. lower measurement noise) have a greater impact on the regression line; it will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](References)]. To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample("a", dist.Normal(0.0, 0.2))
bM = numpyro.sample("bM", dist.Normal(0.0, 0.5))
M = bM * marriage
bA = numpyro.sample("bA", dist.Normal(0.0, 0.5))
A = bA * age
sigma = numpyro.sample("sigma", dist.Exponential(1.0))
mu = a + M + A
divorce_rate = numpyro.sample("divorce_rate", dist.Normal(mu, sigma))
numpyro.sample("obs", dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset["DivorceScaledSD"] = dset["Divorce SE"] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
divorce=dset.DivorceScaled.values,
)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:06<00:00, 578.19it/s, 15 steps of size 2.58e-01. acc. prob=0.93]
###Markdown
Effect of Incorporating Measurement Noise on Residuals. Notice that our values for the regression coefficients are very similar to those of Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as before.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(
rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values,
)["obs"]
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, "--")
ax.errorbar(
residuals_mean[idx], y, xerr=err[idx], marker="o", ms=5, mew=4, ls="none", alpha=0.8
)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx], ls="none", color="orange", alpha=0.9)
# Plot earlier mean residual
ax.plot(
jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx],
y,
ls="none",
marker="o",
ms=6,
color="black",
alpha=0.6,
)
ax.set(xlabel="Residuals", ylabel="State", title="Residuals with 90% CI")
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10)
ax.text(
-2.8,
-7,
"Residuals (with error-bars) from current model (in red). "
"Black marker \nshows residuals from the previous model (Model 3). "
"Measurement \nerror is indicated by orange bar.",
);
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise given by the inner (orange) error bars. The black dots are the mean residuals from our earlier Model 3. Notice how the additional degree of freedom used to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model. To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls="none", marker="o")
ax.plot(x, y2, ls="none", marker="o")
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], "--", color="gray")
ax.set(
xlabel="Measurement Noise",
ylabel="Residual",
title="Mean residuals (Model 4: red, Model 3: blue)",
);
###Output
_____no_output_____
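###Markdown
Finally, to quantify the shrinkage discussed above, here is a minimal sketch comparing the mean absolute (posterior-mean) residual across states with and without the measurement-error term:
###Code
# Mean absolute residual across states, Model 3 vs. the measurement-error model (sketch).
print("Model 3:                ", jnp.mean(jnp.abs(jnp.mean(residuals_3, axis=0))))
print("Measurement-error model:", jnp.mean(jnp.abs(jnp.mean(residuals_4, axis=0))))
###Output
_____no_output_____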
###Markdown
Bayesian Regression Using NumPyroIn this tutorial, we will explore how to do bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](References)]. In particular, we would like to explore the following: - Write a simple model using the `sample` NumPyro primitive. - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest. - Learn about inference utilities such as `Predictive` and `log_likelihood`. - Learn how we can use effect-handlers in NumPyro to generate execution traces from the model, condition on sample statements, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc. Tutorial Outline:1. [Dataset](Dataset)2. [Regression Model to Predict Divorce Rate](Regression-Model-to-Predict-Divorce-Rate) - [Model-1: Predictor-Marriage Rate](Model-1:-Predictor---Marriage-Rate) - [Posterior Distribution over the Regression Parameters](Posterior-Distribution-over-the-Regression-Parameters) - [Posterior Predictive Distribution](Posterior-Predictive-Distribution) - [Predictive Utility With Effect Handlers](Predictive-Utility-With-Effect-Handlers) - [Model Predictive Density](Model-Predictive-Density) - [Model-2: Predictor-Median Age of Marriage](Model-2:-Predictor---Median-Age-of-Marriage) - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage) - [Divorce Rate Residuals by State](Divorce-Rate-Residuals-by-State)3. [Regression Model with Measurement Error](Regression-Model-with-Measurement-Error) - [Effect of Incorporating Measurement Noise on Residuals](Effect-of-Incorporating-Measurement-Noise-on-Residuals)4. [References](References)
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random, vmap
from jax.scipy.special import logsumexp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import numpyro
from numpyro.diagnostics import hpdi
import numpyro.distributions as dist
from numpyro import handlers
from numpyro.infer import MCMC, NUTS
plt.style.use('bmh')
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats('svg')
assert numpyro.__version__.startswith('0.7.1')
###Output
_____no_output_____
###Markdown
DatasetFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.
###Code
DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'
dset = pd.read_csv(DATASET_URL, sep=';')
dset
###Output
_____no_output_____
###Markdown
Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`.
###Code
vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']
sns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');
###Output
_____no_output_____
###Markdown
From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. There is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results.
###Code
sns.regplot(x='WaffleHouses', y='Divorce', data=dset);
###Output
_____no_output_____
###Markdown
This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](References)] which explores the problem of causal association in the presence of multiple predictors. For simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial. Regression Model to Predict Divorce RateLet us now write a regressionn model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. First, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in [faster inference](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html).
###Code
standardize = lambda x: (x - x.mean()) / x.std()
dset['AgeScaled'] = dset.MedianAgeMarriage.pipe(standardize)
dset['MarriageScaled'] = dset.Marriage.pipe(standardize)
dset['DivorceScaled'] = dset.Divorce.pipe(standardize)
###Output
_____no_output_____
###Markdown
We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following: - In NumPyro, *model* code is any Python callable which can optionally accept additional arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords remain static during inference, but we can reuse the same model to generate [predictions](Posterior-Predictive-Distribution) on new data. - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects using effect handlers. For more on effect handlers, refer to [[3](References)], [[4](References)]. For now, just remember that a `sample` statement makes this a stochastic function that samples some latent parameters from a *prior distribution*. Our goal is to infer the *posterior distribution* of these parameters conditioned on observed data. - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.
###Code
def model(marriage=None, age=None, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
M, A = 0., 0.
if marriage is not None:
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
if age is not None:
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
numpyro.sample('obs', dist.Normal(mu, sigma), obs=divorce)
###Output
_____no_output_____
###Markdown
Model 1: Predictor - Marriage RateWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](References)] for more details on the NUTS algorithm) to run inference on this simple model.The Hamiltonian Monte Carlo (or, the NUTS) implementation in NumPyro takes in a potential energy function. This is the negative log joint density for the model. Therefore, for our model description above, we need to construct a function which given the parameter values returns the potential energy (or negative log joint density). Additionally, the verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user. Thankfully, this is handled on the backend for us, within a convenience class for doing [MCMC inference](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.MCMC) that has the following methods: - `run(...)`: runs warmup, adapts steps size and mass matrix, and does sampling using the sample from the warmup phase. - `print_summary()`: print diagnostic information like quantiles, effective sample size, and the Gelman-Rubin diagnostic. - `get_samples()`: gets samples from the posterior distribution.Note the following: - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jaxrandom-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed. - We run inference with the `NUTS` sampler. To run vanilla HMC, we can instead use the [HMC](https://numpyro.readthedocs.io/en/latest/mcmc.htmlnumpyro.mcmc.HMC) class.
###Code
# Start from this source of randomness. We will split keys for subsequent operations.
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
# Run NUTS.
kernel = NUTS(model)
num_samples = 2000
mcmc = MCMC(kernel, num_warmup=1000, num_samples=num_samples)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_1 = mcmc.get_samples()
###Output
sample: 100%|██████████| 3000/3000 [00:04<00:00, 748.14it/s, 7 steps of size 7.41e-01. acc. prob=0.92]
###Markdown
Posterior Distribution over the Regression ParametersWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability which is 0.8, by default. We were able to successfully adapt our step size to achieve this target in the warmup phase.During warmup, the aim is to adapt hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](References)] for more details). If there are any issues in the model specification, the first signal to notice would be low acceptance probabilities or very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.At the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.gelman_rubin) ($\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the "target distribution" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains for more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` since `init_samples` was a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.To look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.htmlnumpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.
###Code
def plot_regression(x, y_mean, y_hpdi):
# Sort values for plotting by x axis
idx = jnp.argsort(x)
marriage = x[idx]
mean = y_mean[idx]
hpdi = y_hpdi[:, idx]
divorce = dset.DivorceScaled.values[idx]
# Plot
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))
ax.plot(marriage, mean)
ax.plot(marriage, divorce, 'o')
ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)
return ax
# Compute empirical posterior distribution over mu
posterior_mu = jnp.expand_dims(samples_1['a'], -1) + \
jnp.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');
###Output
_____no_output_____
###Markdown
We can see from the plot, that the CI broadens towards the tails where the data is relatively sparse, as can be expected. Prior Predictive DistributionLet us check that we have set sensible priors by sampling from the prior predictive distribution. NumPyro provides a handy [Predictive](http://num.pyro.ai/en/latest/utilities.htmlnumpyro.infer.util.Predictive) utility for this purpose.
###Code
from numpyro.infer import Predictive
rng_key, rng_key_ = random.split(rng_key)
prior_predictive = Predictive(model, num_samples=100)
prior_predictions = prior_predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
mean_prior_pred = jnp.mean(prior_predictions, axis=0)
hpdi_prior_pred = hpdi(prior_predictions, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_prior_pred, hpdi_prior_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
Posterior Predictive DistributionLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. Note that by default we generate a single prediction for each sample from the joint posterior distribution, but this can be controlled using the `num_samples` argument.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictive = Predictive(model, samples_1)
predictions = predictive(rng_key_, marriage=dset.MarriageScaled.values)['obs']
df = dset.filter(['Location'])
df['Mean Predictions'] = jnp.mean(predictions, axis=0)
df.head()
###Output
_____no_output_____
###Markdown
Predictive Utility With Effect HandlersTo remove the magic behind `Predictive`, let us see how we can combine [effect handlers](https://numpyro.readthedocs.io/en/latest/handlers.html) with the [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) JAX primitive to implement our own simplified predictive utility function that can do vectorized predictions.
###Code
def predict(rng_key, post_samples, model, *args, **kwargs):
model = handlers.seed(handlers.condition(model, post_samples), rng_key)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
return model_trace['obs']['value']
# vectorize predictions via vmap
predict_fn = vmap(lambda rng_key, samples: predict(rng_key, samples, model, marriage=dset.MarriageScaled.values))
###Output
_____no_output_____
###Markdown
Note the use of the `condition`, `seed` and `trace` effect handlers in the `predict` function. - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement in the model. - The `condition` effect handler conditions the latent sample sites to certain values. In our case, we are conditioning on values from the posterior distribution returned by MCMC. - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density. It should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jaxauto-vectorization-with-vmap) to vectorize predictions. Note that if we didn't use `vmap`, we would have to use a native for loop which for each sample which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution. We note that our mean predictions match those obtained from the `Predictive` utility class.
###Code
# Using the same key as we used for Predictive - note that the results are identical.
predictions_1 = predict_fn(random.split(rng_key_, num_samples), samples_1)
mean_pred = jnp.mean(predictions_1, axis=0)
df = dset.filter(['Location'])
df['Mean Predictions'] = mean_pred
df.head()
hpdi_pred = hpdi(predictions_1, 0.9)
ax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');
###Output
_____no_output_____
###Markdown
We have used the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Most data points lie well within the 90% CI, which indicates a good fit. Posterior Predictive DensityLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset, and the log posterior predictive density [[6](References)] which is given by $$ log \prod_{i=1}^{n} \int p(y_i | \theta) p_{post}(\theta) d\theta \approx \sum_{i=1}^n log \frac{\sum_s p(\theta^{s})}{S} \\= \sum_{i=1}^n (log \sum_s p(\theta^{s}) - log(S))$$.Here, $i$ indexes the observed data points $y$ and $s$ indexes the posterior samples over the latent parameters $\theta$. If the posterior predictive density for a model has a comparatively high value, it indicates that the observed data-points have higher probability under the given model.
###Code
def log_likelihood(rng_key, params, model, *args, **kwargs):
model = handlers.condition(model, params)
model_trace = handlers.trace(model).get_trace(*args, **kwargs)
obs_node = model_trace['obs']
return obs_node['fn'].log_prob(obs_node['value'])
def log_pred_density(rng_key, params, model, *args, **kwargs):
n = list(params.values())[0].shape[0]
log_lk_fn = vmap(lambda rng_key, params: log_likelihood(rng_key, params, model, *args, **kwargs))
log_lk_vals = log_lk_fn(random.split(rng_key, n), params)
return (logsumexp(log_lk_vals, 0) - jnp.log(n)).sum()
###Output
_____no_output_____
###Markdown
Note that NumPyro provides the [log_likelihood](http://num.pyro.ai/en/latest/utilities.htmllog-likelihood) utility function that can be used directly for computing `log likelihood` as in the first function for any general model. In this tutorial, we would like to emphasize that there is nothing magical about such utility functions, and you can roll out your own inference utilities using NumPyro's effect handling stack.
###Code
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_1,
model,
marriage=dset.MarriageScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -66.70008087158203
###Markdown
Model 2: Predictor - Median Age of MarriageWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following: - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate. - We get a higher log likelihood as compared to Model 2, indicating that median age of marriage is likely a much better predictor of divorce rate.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_2 = mcmc.get_samples()
posterior_mu = jnp.expand_dims(samples_2['a'], -1) + \
jnp.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values
mean_mu = jnp.mean(posterior_mu, axis=0)
hpdi_mu = hpdi(posterior_mu, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)
ax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
predictions_2 = Predictive(model, samples_2)(rng_key_,
age=dset.AgeScaled.values)['obs']
mean_pred = jnp.mean(predictions_2, axis=0)
hpdi_pred = hpdi(predictions_2, 0.9)
ax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)
ax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(log_pred_density(rng_key_,
samples_2,
model,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)))
###Output
Log posterior predictive density: -59.251956939697266
###Markdown
Model 3: Predictor - Marriage Rate and Median Age of MarriageFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that the model's posterior predictive density is similar to Model 2 which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.
###Code
rng_key, rng_key_ = random.split(rng_key)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_3 = mcmc.get_samples()
rng_key, rng_key_ = random.split(rng_key)
print('Log posterior predictive density: {}'.format(
log_pred_density(rng_key_,
samples_3,
model,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce=dset.DivorceScaled.values)
))
###Output
Log posterior predictive density: -59.06374740600586
###Markdown
Divorce Rate Residuals by StateThe regression plots above shows that the observed divorce rates for many states differs considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.
###Code
# Predictions for Model 3.
rng_key, rng_key_ = random.split(rng_key)
predictions_3 = Predictive(model, samples_3)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values)['obs']
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))
pred_mean = jnp.mean(predictions_3, axis=0)
pred_hpdi = hpdi(predictions_3, 0.9)
residuals_3 = dset.DivorceScaled.values - predictions_3
residuals_mean = jnp.mean(residuals_3, axis=0)
residuals_hpdi = hpdi(residuals_3, 0.9)
idx = jnp.argsort(residuals_mean)
# Plot posterior predictive
ax[0].plot(jnp.zeros(50), y, '--')
ax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[0].plot(dset.DivorceScaled.values[idx], y, marker='o',
ls='none', color='gray')
ax[0].set(xlabel='Posterior Predictive (red) vs. Actuals (gray)', ylabel='State',
title='Posterior Predictive with 90% CI')
ax[0].set_yticks(y)
ax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);
# Plot residuals (residuals_3, residuals_mean and residuals_hpdi are reused from above)
err = residuals_hpdi[1] - residuals_mean
ax[1].plot(jnp.zeros(50), y, '--')
ax[1].errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
ax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax[1].set_yticks(y)
ax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);
###Output
_____no_output_____
###Markdown
The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.Overall, the model fit seems good because most observed data points lie within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that are missing from our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section. Regression Model with Measurement ErrorNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate the measurement error given by the `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 14 of [[1](References)].To do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).
###Code
def model_se(marriage, age, divorce_sd, divorce=None):
a = numpyro.sample('a', dist.Normal(0., 0.2))
bM = numpyro.sample('bM', dist.Normal(0., 0.5))
M = bM * marriage
bA = numpyro.sample('bA', dist.Normal(0., 0.5))
A = bA * age
sigma = numpyro.sample('sigma', dist.Exponential(1.))
mu = a + M + A
divorce_rate = numpyro.sample('divorce_rate', dist.Normal(mu, sigma))
numpyro.sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)
# Standardize
dset['DivorceScaledSD'] = dset['Divorce SE'] / jnp.std(dset.Divorce.values)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(model_se, target_accept_prob=0.9)
mcmc = MCMC(kernel, num_warmup=1000, num_samples=3000)
mcmc.run(rng_key_, marriage=dset.MarriageScaled.values, age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values, divorce=dset.DivorceScaled.values)
mcmc.print_summary()
samples_4 = mcmc.get_samples()
###Output
sample: 100%|██████████| 4000/4000 [00:06<00:00, 578.19it/s, 15 steps of size 2.58e-01. acc. prob=0.93]
###Markdown
Effect of Incorporating Measurement Noise on ResidualsNotice that our values for the regression coefficients are very similar to Model 3. However, introducing measurement noise allows us to more closely match our predictive distribution to the observed values. We can see this if we plot the residuals as earlier.
###Code
rng_key, rng_key_ = random.split(rng_key)
predictions_4 = Predictive(model_se, samples_4)(rng_key_,
marriage=dset.MarriageScaled.values,
age=dset.AgeScaled.values,
divorce_sd=dset.DivorceScaledSD.values)['obs']
sd = dset.DivorceScaledSD.values
residuals_4 = dset.DivorceScaled.values - predictions_4
residuals_mean = jnp.mean(residuals_4, axis=0)
residuals_hpdi = hpdi(residuals_4, 0.9)
err = residuals_hpdi[1] - residuals_mean
idx = jnp.argsort(residuals_mean)
y = jnp.arange(50)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))
# Plot Residuals
ax.plot(jnp.zeros(50), y, '--')
ax.errorbar(residuals_mean[idx], y, xerr=err[idx],
marker='o', ms=5, mew=4, ls='none', alpha=0.8)
# Plot SD
ax.errorbar(residuals_mean[idx], y, xerr=sd[idx],
ls='none', color='orange', alpha=0.9)
# Plot earlier mean residual
ax.plot(jnp.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,
ls='none', marker='o', ms=6, color='black', alpha=0.6)
ax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')
ax.set_yticks(y)
ax.set_yticklabels(dset.Loc.values[idx], fontsize=10);
ax.text(-2.8, -7, 'Residuals (with error-bars) from current model (in red). '
'Black marker \nshows residuals from the previous model (Model 3). '
'Measurement \nerror is indicated by orange bar.');
###Output
_____no_output_____
###Markdown
The plot above shows the residuals for each of the states, along with the measurement noise indicated by the inner (orange) error bar. The black markers are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.To better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.
###Code
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
x = dset.DivorceScaledSD.values
y1 = jnp.mean(residuals_3, 0)
y2 = jnp.mean(residuals_4, 0)
ax.plot(x, y1, ls='none', marker='o')
ax.plot(x, y2, ls='none', marker='o')
for i, (j, k) in enumerate(zip(y1, y2)):
ax.plot([x[i], x[i]], [j, k], '--', color='gray');
ax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');
###Output
_____no_output_____ |
rain_forecast.ipynb | ###Markdown
Importing packages and dataset
###Code
import os
import matplotlib.pyplot as plt
# import earthpy as et
import numpy as np
import pandas as pd
df = pd.read_csv('83377.csv')
df.head()
df.columns
def derive_nth_day_feature(df, feature, N):
rows = df.shape[0]
nth_prior_measurements = [None] * N + [df[feature][i - N] for i in range(N, rows)]
col_name = "{}_{}".format(feature, N)
df[col_name] = nth_prior_measurements
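# Hypothetical usage sketch (an assumption, not part of the original notebook):
# add lagged copies of 'TempMaxima' for the previous three days, creating the
# columns 'TempMaxima_1', 'TempMaxima_2' and 'TempMaxima_3'.
# for N in range(1, 4):
#     derive_nth_day_feature(df, 'TempMaxima', N)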
def dataframe_sort(df):
# list the features to drop: everything except the weather measurements we want to keep
to_remove = [feature
for feature in df
if feature not in ['Precipitacao', 'TempBulboSeco', 'TempBulboUmido', 'TempMaxima', 'TempMinima',
'UmidadeRelativa', 'PressaoAtmEstacao', 'PressaoAtmMar', 'DirecaoVento',
'VelocidadeVento', 'Insolacao', 'Nebulosidade', 'Evaporacao Piche',
'Temp Comp Media', 'Umidade Relativa Media', 'Velocidade do Vento Media']]
# make a list of columns to keep
to_keep = [col for col in df.columns if col not in to_remove]
# select only the columns in to_keep and assign to df
df = df[to_keep]
for feature in df:
if feature != 'Data':
for N in range(1, 4):
derive_nth_day_feature(df, feature, N)
return df
# df = dataframe_sort(df)
# # df.head()
# df2 = df
del df['Unnamed: 0']
del df['Unnamed: 19']
del df['Estacao']
df.info()
# df = df2
df['TempMaxima'].value_counts()
df.head()
# df = df.loc[:, df.columns != 'Data']
# print(df.head())
df = df.fillna(df.mean(axis=0, skipna=True))
#df['Precipitacao'].value_counts()
df.head()
plt.figure(figsize=(14,6))
df['TempMaxima'].plot()
###Output
_____no_output_____ |
DSES_2nd_review/kaggle_prediction_of_house_price.ipynb | ###Markdown
1. Data Loading
###Code
test=pd.read_csv('kc_house_data.csv')
test.head()
test.isnull().head()
test.isnull().any()
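# A hedged next step (an assumption, not part of the original notebook): if any
# column reports missing values above, one simple option is median imputation
# of the numeric columns before modelling.
# test = test.fillna(test.median(numeric_only=True))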
###Output
_____no_output_____ |
docs/user_guide/weights/weights_nb.ipynb | ###Markdown
Generating spatial weights`momepy` uses `libpysal` to handle spatial weights, but it also builds on top of it. This notebook will show how to use different weights.
###Code
import momepy
import geopandas as gpd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will again use `osmnx` to get the data for our example and, after preprocessing the building layer, generate the tessellation layer.
###Code
import osmnx as ox
gdf = ox.footprints.footprints_from_place(place='Kahla, Germany')
gdf_projected = ox.projection.project_gdf(gdf)
buildings = momepy.preprocess(gdf_projected, size=30,
compactness=True, islands=True)
buildings['uID'] = momepy.unique_id(buildings)
limit = momepy.buffered_limit(buildings)
tessellation = momepy.Tessellation(buildings, unique_id='uID', limit=limit).tessellation
###Output
Loop 1 out of 2.
###Markdown
Queen contiguityMorphological tessellation allows using a contiguity-based weights matrix. While `libpysal.weights.contiguity.Queen` will build the standard Queen contiguity matrix of the first order, it might not be enough to capture the proper context. For that reason, we can use `momepy.sw_high` to capture all neighbours within a set topological distance `k`. It generates spatial weights of higher orders under the hood and joins them together.
###Code
sw3 = momepy.sw_high(k=3, gdf=tessellation, ids='uID')
###Output
_____no_output_____
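For intuition, here is a rough sketch of what `momepy.sw_high` does under the hood, assuming `libpysal`'s `Queen`, `higher_order` and `w_union` utilities (a simplification for illustration, not the actual implementation):

```python
from libpysal.weights import Queen, higher_order, w_union

def sw_high_sketch(k, gdf, ids):
    # First-order Queen contiguity as the base.
    first = Queen.from_dataframe(gdf, ids=ids)
    joined = first
    # Union in the exact order-2 ... order-k contiguity matrices,
    # so the result contains all neighbours up to topological distance k.
    for order in range(2, k + 1):
        joined = w_union(joined, higher_order(first, k=order))
    return joined

# Hypothetical usage, mirroring the call above:
# sw3 = sw_high_sketch(k=3, gdf=tessellation, ids='uID')
```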
###Markdown
Queen contiguity of morphological tessellation can capture a comparable level of information across the study area - the number of neighbours is relatively similar and depends on the morphology of urban form. We can visualize it by counting the number of neighbours (as captured by `sw3`).
###Code
tessellation['neighbours'] = momepy.Neighbors(tessellation, sw3,'uID').series
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighbours', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
DistanceOften we want to define the neighbours based on metric distance. We will look at two options - distance band and k-nearest neighbour. Distance bandWe can imagine a distance band as a buffer of a set radius around each object, for example, 400 meters. For that, we can use `libpysal.weights.DistanceBand`:
###Code
import libpysal
dist400 = libpysal.weights.DistanceBand.from_dataframe(buildings, 400,
ids='uID')
###Output
/Users/martin/anaconda3/envs/momepy_guide/lib/python3.7/site-packages/libpysal/weights/weights.py:165: UserWarning: The weights matrix is not fully connected:
There are 2 disconnected components.
There is 1 island with id: 324.
warnings.warn(message)
###Markdown
Because we have defined spatial weights using uID, we can use `dist400` generated on buildings and use it on tessellation:
###Code
tessellation['neighbours400'] = momepy.Neighbors(tessellation, dist400, 'uID').series
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighbours400', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
K nearest neighborIf we want a fixed number of neighbours, we can use `libpysal.weights.KNN`:
###Code
knn = libpysal.weights.KNN.from_dataframe(buildings, k=200, ids='uID')
tessellation['neighboursKNN'] = momepy.Neighbors(tessellation, knn,'uID').series
###Output
100%|██████████| 2518/2518 [00:00<00:00, 10159.05it/s]
###Markdown
**Note**: As all tessellation cells have the same number of neighbours (due to KNN), they all have the same colour.
###Code
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighboursKNN', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
Generating spatial weights`momepy` uses `libpysal` to handle spatial weights, but it also builds on top of it. This notebook will show how to use different weights.
###Code
import momepy
import geopandas as gpd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will again use `osmnx` to get the data for our example and, after preprocessing the building layer, generate the tessellation layer.
###Code
import osmnx as ox
gdf = ox.geometries.geometries_from_place('Kahla, Germany', tags={'building':True})
gdf_projected = ox.projection.project_gdf(gdf)
buildings = momepy.preprocess(gdf_projected, size=30,
compactness=True, islands=True)
buildings['uID'] = momepy.unique_id(buildings)
limit = momepy.buffered_limit(buildings)
tessellation = momepy.Tessellation(buildings, unique_id='uID', limit=limit).tessellation
###Output
Loop 1 out of 2.
###Markdown
Queen contiguityMorphological tessellation allows using a contiguity-based weights matrix. While `libpysal.weights.contiguity.Queen` will build the standard Queen contiguity matrix of the first order, it might not be enough to capture the proper context. For that reason, we can use `momepy.sw_high` to capture all neighbours within a set topological distance `k`. It generates spatial weights of higher orders under the hood and joins them together.
###Code
sw3 = momepy.sw_high(k=3, gdf=tessellation, ids='uID')
###Output
_____no_output_____
###Markdown
Queen contiguity of morphological tessellation can capture a comparable level of information across the study area - the number of neighbours is relatively similar and depends on the morphology of urban form. We can visualize it by counting the number of neighbours (as captured by `sw3`).
###Code
tessellation['neighbours'] = momepy.Neighbors(tessellation, sw3,'uID').series
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighbours', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
DistanceOften we want to define the neighbours based on metric distance. We will look at two options - distance band and k-nearest neighbour. Distance bandWe can imagine a distance band as a buffer of a set radius around the centroid of each object, for example, 400 meters. For that, we can use `libpysal.weights.DistanceBand`:
###Code
import libpysal
dist400 = libpysal.weights.DistanceBand.from_dataframe(buildings, 400,
ids='uID')
###Output
/opt/miniconda3/envs/momepy_guide/lib/python3.8/site-packages/libpysal/weights/weights.py:172: UserWarning: The weights matrix is not fully connected:
There are 2 disconnected components.
There is 1 island with id: 212.
warnings.warn(message)
###Markdown
Because we have defined spatial weights using uID, we can use `dist400` generated on buildings and use it on tessellation:
###Code
tessellation['neighbours400'] = momepy.Neighbors(tessellation, dist400, 'uID').series
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighbours400', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
K nearest neighborIf we want a fixed number of neighbours, we can use `libpysal.weights.KNN`:
###Code
knn = libpysal.weights.KNN.from_dataframe(buildings, k=200, ids='uID')
tessellation['neighboursKNN'] = momepy.Neighbors(tessellation, knn,'uID').series
###Output
100%|██████████| 2005/2005 [00:00<00:00, 46032.72it/s]
###Markdown
**Note**: As all tessellation cells have the same number of neighbours (due to KNN), they all have the same colour.
###Code
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighboursKNN', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
Generating spatial weights`momepy` uses `libpysal` to handle spatial weights, but it also builds on top of it. This notebook will show how to use different weights.
###Code
import momepy
import geopandas as gpd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We will again use `osmnx` to get the data for our example and then generate the tessellation layer.
###Code
import osmnx as ox
gdf = ox.geometries.geometries_from_place('Kahla, Germany', tags={'building':True})
buildings = ox.projection.project_gdf(gdf)
buildings['uID'] = momepy.unique_id(buildings)
limit = momepy.buffered_limit(buildings)
tessellation = momepy.Tessellation(buildings, unique_id='uID', limit=limit).tessellation
###Output
Inward offset...
Generating input point array...
Generating Voronoi diagram...
Generating GeoDataFrame...
Dissolving Voronoi polygons...
###Markdown
Queen contiguityMorphological tessellation allows using a contiguity-based weights matrix. While `libpysal.weights.contiguity.Queen` will build the standard Queen contiguity matrix of the first order, it might not be enough to capture the proper context. For that reason, we can use `momepy.sw_high` to capture all neighbours within a set topological distance `k`. It generates spatial weights of higher orders under the hood and joins them together.
###Code
sw3 = momepy.sw_high(k=3, gdf=tessellation, ids='uID')
###Output
_____no_output_____
###Markdown
Queen contiguity of morphological tessellation can capture a comparable level of information across the study area - the number of neighbours is relatively similar and depends on the morphology of urban form. We can visualize it by counting the number of neighbours (as captured by `sw3`).
###Code
tessellation['neighbours'] = momepy.Neighbors(tessellation, sw3,'uID').series
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighbours', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
DistanceOften we want to define the neighbours based on metric distance. We will look at two options - distance band and k-nearest neighbour. Distance bandWe can imagine a distance band as a buffer of a set radius around the centroid of each object, for example, 400 meters. For that, we can use `libpysal.weights.DistanceBand`:
###Code
import libpysal
dist400 = libpysal.weights.DistanceBand.from_dataframe(buildings, 400,
ids='uID')
###Output
/opt/miniconda3/envs/geo_dev/lib/python3.9/site-packages/libpysal/weights/weights.py:172: UserWarning: The weights matrix is not fully connected:
There are 2 disconnected components.
There is 1 island with id: 330.
warnings.warn(message)
###Markdown
Because we have defined spatial weights using uID, we can use `dist400` generated on buildings and use it on tessellation:
###Code
tessellation['neighbours400'] = momepy.Neighbors(tessellation, dist400, 'uID').series
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighbours400', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____
###Markdown
K nearest neighborIf we want a fixed number of neighbours, we can use `libpysal.weights.KNN`:
###Code
knn = libpysal.weights.KNN.from_dataframe(buildings, k=200, ids='uID')
tessellation['neighboursKNN'] = momepy.Neighbors(tessellation, knn,'uID').series
###Output
_____no_output_____
###Markdown
**Note**: As all tessellation cells have the same number of neighbours (due to KNN), they all have the same colour.
###Code
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, column='neighboursKNN', legend=True, cmap='Spectral_r')
buildings.plot(ax=ax, color="white", alpha=0.4)
ax.set_axis_off()
plt.show()
###Output
_____no_output_____ |
site/es-419/guide/keras/functional.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
The Keras functional API in TensorFlow View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Note: Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that this is an accurate and up-to-date reflection of the [Official English Documentation](https://www.tensorflow.org/?hl=en). If you have suggestions on how to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) repository. To volunteer to write or review community translations, please contact the [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Setup
###Code
import tensorflow as tf
tf.keras.backend.clear_session()  # Simple reset
###Output
_____no_output_____
###Markdown
IntroductionYou're already familiar with using `keras.Sequential()` to create models. The functional API is a way to create models that are more flexible than `Sequential`: the functional API can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs. It's based on the idea that a deep learning model is usually a directed acyclic graph (DAG) of layers. The functional API is a set of tools for **building graphs of layers**. Consider the following model:
```
(input: 784-dimensional vectors)
       ↧
[Dense (64 units, relu activation)]
       ↧
[Dense (64 units, relu activation)]
       ↧
[Dense (10 units, softmax activation)]
       ↧
(output: probability distribution over 10 classes)
```
It's a simple graph of three layers. To build this model with the functional API, you would start by creating an input node:
###Code
from tensorflow import keras
inputs = keras.Input(shape=(784,))
###Output
_____no_output_____
###Markdown
Here we just specify the shape of our data: 784-dimensional vectors. Note that the batch size is always omitted; only the shape of each sample is included. For an image-type input of shape `(32, 32, 3)` it would have been:
###Code
img_inputs = keras.Input(shape=(32, 32, 3))
###Output
_____no_output_____
###Markdown
What gets returned, `inputs`, contains information about the shape and dtype of the data you expect to feed to your model:
###Code
inputs.shape
inputs.dtype
###Output
_____no_output_____
###Markdown
You can create a new node in the graph of layers by calling a layer on this `inputs` object:
###Code
from tensorflow.keras import layers
dense = layers.Dense(64, activation='relu')
x = dense(inputs)
###Output
_____no_output_____
###Markdown
The "layer call" action is like drawing an arrow from "inputs" to the layer we created. We're "passing" the inputs to the `dense` layer, and out we get `x`. Let's add a few more layers to our graph of layers:
###Code
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
At this point, we can create a `Model` by specifying its inputs and outputs in the graph of layers.
###Code
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Recapitulando, esta es nuestra definción completa del proceso:
###Code
inputs = keras.Input(shape=(784,), name='img')
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
###Output
_____no_output_____
###Markdown
Let's see what the model summary looks like:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
We can also plot the model as a graph:
###Code
keras.utils.plot_model(model, 'my_first_model.png')
###Output
_____no_output_____
###Markdown
And, optionally, display the input and output shapes of each layer in the plotted graph:
###Code
keras.utils.plot_model(model, 'my_first_model_with_shape_info.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
This figure and the code we wrote are virtually identical. In the code version, the connection arrows are simply replaced by the call operation. A "graph of layers" is a very intuitive mental image for a deep learning model, and the functional API is a way to create models that closely mirror this mental image. Training, evaluation, and inferenceTraining, evaluation, and inference work exactly the same way for models built with the functional API as for Sequential models. Here is a quick demonstration. We load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), and finally evaluate our model on the test data:
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=64,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
###Output
_____no_output_____
###Markdown
For a complete guide about model training and evaluation, see the [Guide to training and evaluation](./train_and_evaluate.ipynb). Saving and serializationSaving and serialization work exactly the same way for models built with the functional API as for Sequential models. A standard way to save a functional model is to call `model.save()` to save the whole model into a single file. You can later recreate the same model from this file, even if you no longer have access to the code that created the model. This file includes:- The model's architecture- The model's weight values (which were learned during training)- The model's training config (what you passed to `compile`), if any- The optimizer and its state, if any (this lets you restart training where you left off)
###Code
model.save('path_to_my_model.h5')
del model
# Recreate the exact same model from the file:
model = keras.models.load_model('path_to_my_model.h5')
###Output
_____no_output_____
###Markdown
For a complete guide about saving models, see the [Guide to saving and serializing models](./save_and_serialize.ipynb). Using the same graph of layers to define multiple modelsIn the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means that a single graph of layers can be used to generate multiple models. In the example below, we use the same stack of layers to instantiate two models: an "encoder" model that turns image inputs into 16-dimensional vectors, and an end-to-end `autoencoder` model for training.
###Code
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()
###Output
_____no_output_____
###Markdown
Note that we make the decoding architecture strictly symmetrical to the encoding architecture, so that we get an output shape that is the same as the input shape `(28, 28, 1)`. The reverse of a `Conv2D` layer is a `Conv2DTranspose` layer, and the reverse of a `MaxPooling2D` layer is an `UpSampling2D` layer. All models are callable, just like layers.You can treat any model as if it were a layer, by calling it on an `Input` or on the output of another layer. Note that by calling a model you aren't just reusing its architecture, you're also reusing its weights. Let's see this in action. Here's a different take on the autoencoder example that creates an encoder model and a decoder model, and chains them in two calls to obtain the autoencoder model:
###Code
encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
decoder = keras.Model(decoder_input, decoder_output, name='decoder')
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()
###Output
_____no_output_____
###Markdown
As you can see, models can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is *ensembling*. As an example, here's how to ensemble a set of models into a single model that averages their predictions:
###Code
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1, activation='sigmoid')(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Manipulating complex graph topologies Models with multiple inputs and outputsThe functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API. Here's a simple example. Suppose you're building a system for ranking custom issue tickets by priority and routing them to the right department. Your model will have 3 inputs:- Title of the ticket (text input)- Text body of the ticket (text input)- Any tags added by the user (categorical input)It will have two outputs:- Priority score between 0 and 1 (scalar sigmoid output)- The department that should handle the ticket (softmax output over the set of departments)Let's build this model in a few lines with the functional API.
###Code
num_tags = 12  # Number of unique issue tags
num_words = 10000  # Size of vocabulary obtained when preprocessing text data
num_departments = 4  # Number of departments for predictions
title_input = keras.Input(shape=(None,), name='title')  # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name='body')  # Variable-length sequence of ints
tags_input = keras.Input(shape=(num_tags,), name='tags')  # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the body into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce the sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce the sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, activation='sigmoid', name='priority')(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, activation='softmax', name='department')(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(inputs=[title_input, body_input, tags_input],
                    outputs=[priority_pred, department_pred])
###Output
_____no_output_____
###Markdown
Plotting the model:
###Code
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
When compiling this model, we can assign different losses to each output. You can even assign different weights to each loss, to modulate its contribution to the total training loss.
###Code
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=['binary_crossentropy', 'categorical_crossentropy'],
loss_weights=[1., 0.2])
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify the losses like this:
###Code
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss={'priority': 'binary_crossentropy',
'department': 'categorical_crossentropy'},
loss_weights=[1., 0.2])
###Output
_____no_output_____
###Markdown
We can train the model by passing lists of Numpy arrays of inputs and targets:
###Code
import numpy as np
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},
{'priority': priority_targets, 'department': dept_targets},
epochs=2,
batch_size=32)
###Output
_____no_output_____
###Markdown
When calling fit with a `Dataset` object, it should yield either a tuple of lists like `([title_data, body_data, tags_data], [priority_targets, dept_targets])` or a tuple of dictionaries like `({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets})`. For a more detailed explanation, see the complete [Guide to training and evaluation](./train_and_evaluate.ipynb). A toy residual network modelIn addition to models with multiple inputs and outputs, the functional API makes it easy to manipulate non-linear connectivity topologies, that is, models where layers are not connected sequentially. This also cannot be handled with the Sequential API (as its name indicates). A common use case for this is residual connections. Let's build a toy ResNet model for CIFAR10 to demonstrate this.
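As a minimal illustration (an assumption added here, not code from the original guide), the dummy arrays above could be wrapped into such a `Dataset`:

```python
import tensorflow as tf

# Build a Dataset yielding (inputs_dict, targets_dict) tuples, batched to 32.
train_ds = tf.data.Dataset.from_tensor_slices((
    {'title': title_data, 'body': body_data, 'tags': tags_data},
    {'priority': priority_targets, 'department': dept_targets},
)).batch(32)
model.fit(train_ds, epochs=2)
```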
###Code
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs, name='toy_resnet')
model.summary()
###Output
_____no_output_____
###Markdown
Plotting the model:
###Code
keras.utils.plot_model(model, 'mini_resnet.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Let's train it:
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss='categorical_crossentropy',
metrics=['acc'])
model.fit(x_train, y_train,
batch_size=64,
epochs=1,
validation_split=0.2)
###Output
_____no_output_____
###Markdown
Shared layersAnother good use for the functional API is models that use shared layers. Shared layers are layer instances that get reused multiple times in the same model: they learn features that correspond to multiple paths in the graph of layers. Shared layers are often used to encode inputs that come from similar spaces (say, two different pieces of text that feature similar vocabulary), since they enable sharing of information across these different inputs, and they make it possible to train such a model on less data. If a given word is seen in one of the inputs, that will benefit the processing of all inputs that go through the shared layer. To share a layer in the functional API, just call the same layer instance multiple times. For instance, here's an `Embedding` layer shared across two different text inputs:
###Code
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype='int32')
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype='int32')
# We reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
###Output
_____no_output_____
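As a hedged continuation (an assumption, not part of the original example), the two shared encodings could feed further shared layers, for instance a Siamese-style comparison head:

```python
# Share an LSTM as well, then compare the two encoded texts.
shared_lstm = layers.LSTM(32)
vector_a = shared_lstm(encoded_input_a)
vector_b = shared_lstm(encoded_input_b)
merged = layers.concatenate([vector_a, vector_b])
outputs = layers.Dense(1, activation='sigmoid')(merged)
siamese_model = keras.Model([text_input_a, text_input_b], outputs)
```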
###Markdown
Extracting and reusing nodes in the graph of layers Because the graph of layers you are manipulating in the functional API is a static data structure, it can be accessed and inspected. This is how we are able to plot functional models as images, for instance. This also means that we can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere. This is extremely useful for feature extraction, for example! Let's look at an example. This is a VGG19 model with weights pre-trained on ImageNet:
###Code
from tensorflow.keras.applications import VGG19
vgg19 = VGG19()
###Output
_____no_output_____
###Markdown
And these are the intermediate activations of the model, obtained by querying the graph data structure:
###Code
features_list = [layer.output for layer in vgg19.layers]
###Output
_____no_output_____
###Markdown
We can use these features to create a new feature-extraction model that returns the values of the intermediate layer activations -- and we can do all of this in 3 lines.
###Code
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)
###Output
_____no_output_____
###Markdown
This comes in handy when [implementing neural style transfer](https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398), among other things. Extending the API by writing custom layerstf.keras includes a wide range of built-in layers. Here are a few examples:- Convolutional layers: `Conv1D`, `Conv2D`, `Conv3D`, `Conv2DTranspose`, etc.- Pooling layers: `MaxPooling1D`, `MaxPooling2D`, `MaxPooling3D`, `AveragePooling1D`, etc.- RNN layers: `GRU`, `LSTM`, `ConvLSTM2D`, etc.- `BatchNormalization`, `Dropout`, `Embedding`, etc.If you don't find what you need, it's easy to extend the API by creating your own layers. All layers subclass the `Layer` class and implement:- A `call` method, which specifies the computation done by the layer.- A `build` method, which creates the weights of the layer (note that this is just a style convention; you could create weights in `__init__` as well).To learn more about creating layers from scratch, see the [Guide to writing layers and models from scratch](./custom_layers_and_models.ipynb). Here's a simple implementation of a `Dense` layer:
###Code
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
###Output
_____no_output_____
###Markdown
If you want your custom layer to support serialization, you should also define a `get_config` method that returns the constructor arguments of the layer instance:
###Code
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(
config, custom_objects={'CustomDense': CustomDense})
###Output
_____no_output_____
###Markdown
Optionally, you could also implement the classmethod `from_config(cls, config)`, which is in charge of recreating a layer instance given its config dictionary. The default implementation of `from_config` is:

```python
def from_config(cls, config):
    return cls(**config)
```

When to use the functional APIHow do you decide whether to use the functional API to create a new model, or just subclass the `Model` class directly? In general, the functional API is higher-level, easier and safer to use, and has a number of features that subclassed models do not support. However, model subclassing gives you greater flexibility when creating models that are not easily expressible as directed acyclic graphs of layers (for example, you could not implement a Tree-RNN with the functional API; you would have to subclass `Model` directly). Here are the strengths of the functional API: The properties listed below are also true for Sequential models (which are also data structures), but they are not true for subclassed models (which are Python bytecode, not data structures). It is less verbose.No `super(MyClass, self).__init__(...)`, no `def call(self, ...):`, etc. Compare:

```python
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
mlp = keras.Model(inputs, outputs)
```

With the subclassed version:

```python
class MLP(keras.Model):

    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.dense_1 = layers.Dense(64, activation='relu')
        self.dense_2 = layers.Dense(10)

    def call(self, inputs):
        x = self.dense_1(inputs)
        return self.dense_2(x)

# Instantiate the model.
mlp = MLP()
# Needed to create the model's state.
# The model doesn't have a state until it's called at least once.
_ = mlp(tf.zeros((1, 32)))
```

It validates your model while you're defining it.In the functional API, your input specification (shape and dtype) is created in advance (via `Input`), and every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it will raise a helpful error message if not. This guarantees that any model you can build with the functional API will run. All debugging (other than convergence-related debugging) happens statically during model construction, and not at execution time. This is similar to type checking in a compiler. Your functional model is plottable and inspectable.You can plot the model as a graph, and you can easily access intermediate nodes in this graph -- for instance, to extract and reuse the activations of intermediate layers, as we saw in a previous example:

```python
features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
```

Your functional model can be serialized or cloned.Because a functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file that lets you recreate the exact same model without having access to any of the original code. See the [guide to saving and serializing models](./save_and_serialize.ipynb) for more details.
Here are the weaknesses of the functional API: It does not support dynamic architectures.The functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all: for example, recursive networks or Tree-RNNs do not follow this assumption and cannot be implemented in the functional API. Sometimes, you just need to write everything from scratch.When writing advanced architectures, you may want to do things that are outside the scope of "defining a DAG of layers": for instance, you may want to expose multiple custom training and inference methods on your model instance. This requires subclassing.---To dive deeper into the differences between the functional API and model subclassing, you can read [What are Symbolic and Imperative APIs in TensorFlow 2.0?](https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0-dfccecb01021). Mix-and-match different API stylesImportantly, choosing between the functional API or model subclassing isn't a binary decision that restricts you to one category of models. All models in the tf.keras API can interact with each other, whether they're Sequential models, functional models, or subclassed models/layers written from scratch. You can always use a functional model or Sequential model as part of a subclassed model/layer:
###Code
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
###Output
_____no_output_____
###Markdown
Inversely, you can use any subclassed Layer or Model in the functional API as long as it implements a `call` method that follows one of the following patterns:- `call(self, inputs, **kwargs)` where `inputs` is a tensor or a nested structure of tensors (e.g. a list of tensors), and where `**kwargs` are non-tensor (non-input) arguments.- `call(self, inputs, training=None, **kwargs)` where `training` is a boolean indicating whether the layer should behave in training mode or in inference mode.- `call(self, inputs, mask=None, **kwargs)` where `mask` is a boolean mask tensor (useful for RNNs, for instance).- `call(self, inputs, training=None, mask=None, **kwargs)` -- of course, you can have both masking-specific and training-specific behavior at the same time.In addition, if you implement the `get_config` method on your custom Layer or Model, the functional models you create with it will still be serializable and clonable. Here's a quick example where we use a custom RNN, written from scratch, in a functional model:
###Code
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
self.classifier = layers.Dense(1, activation='sigmoid')
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that we specify a static batch size for the inputs with the `batch_shape`
# arg, because the internal computation of `CustomRNN` requires a static batch size
# (when we create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
La API funcional "Keras" en TensorFlow Ver en TensorFlow.org Correr en Google Colab Ver código fuente en GitHub Descargar notebook Note: Nuestra comunidad de Tensorflow ha traducido estos documentos. Como las traducciones de la comunidadson basados en el "mejor esfuerzo", no hay ninguna garantia que esta sea un reflejo preciso y actual de la [Documentacion Oficial en Ingles](https://www.tensorflow.org/?hl=en).Si tienen sugerencias sobre como mejorar esta traduccion, por favor envian un "Pull request"al siguiente repositorio [tensorflow/docs](https://github.com/tensorflow/docs).Para ofrecerse como voluntario o hacer revision de las traducciones de la Comunidadpor favor contacten al siguiente grupo [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Setup
###Code
import tensorflow as tf
tf.keras.backend.clear_session() # Reseteo sencillo
###Output
_____no_output_____
###Markdown
IntroduccionYa estás familiarizado con el uso del metodo `keras.Sequential()` para crear modelos.La API funcional es una forma de crear modelos mas dinamicos que con ` Sequential `: La API funcional puede manejar modelos con topología no lineal, modelos con capas compartidas y modelos con múltiples entradas o salidas.Se basa en la idea de que un modelo de aprendizaje profundosuele ser un gráfico acíclico dirigido (DAG) de capas.La API funcional es un conjunto de herramientas para **construir gráficos de capas**.Considera el siguiente modelo:```(input: 784-vectores dimensionales) ↧[Dense (64 units, activacion relu)] ↧[Dense (64 units, activacion relu)] ↧[Dense (10 units, activacion softmax)] ↧(output: distribución de probabilidad en 10 clases)```Es una simple grafica de tres capas.Para construir este modelo con la API funcional,comenzarías creando un nodo de entrada:
###Code
from tensorflow import keras
inputs = keras.Input(shape=(784,))
###Output
_____no_output_____
###Markdown
Aqui solo especificamos el tipo de nuestra data set: 784-vectores dimensionales.Nota que el tamaño del batch siempre debe ser omitido, solo se incluye el tipo de la data set.Para una input de tipo imágen ` (31,32,3) ` hubiese sido:
###Code
img_inputs = keras.Input(shape=(32, 32, 3))
###Output
_____no_output_____
###Markdown
Lo que se devuelve, ` input `, contiene información sobre la forma y el tipo de dato que se espera ingresa en tu modelo:
###Code
inputs.shape
inputs.dtype
###Output
_____no_output_____
###Markdown
Puedes crear un nuevo nodo en el grafico de capas mandando a llamar al objeto ` input `.
###Code
from tensorflow.keras import layers
dense = layers.Dense(64, activation='relu')
x = dense(inputs)
###Output
_____no_output_____
###Markdown
La acción "layer call" es como dibujar una flecha desde "entradas" a la capa que creamos.Estamos "pasando" las entradas a la capa `dense`, y afuera obtenemos` x`.Agreguemos algunas capas más a nuestro gráfico de capas:La acción "llamada a la capa" es como dibujar una flecha de "entradas" a la capa que creamos.Estamos pasando las entradas a una capa mas densa, y respecto a la salida obtenemos una ` x `.
###Code
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
LLegados a este punto, podemos crear un ` Modelo ` especificando sus entradas y salidas en las capas de graficas.
###Code
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Recapitulando, esta es nuestra definción completa del proceso:
###Code
inputs = keras.Input(shape=(784,), name='img')
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
###Output
_____no_output_____
###Markdown
Veamos como se muestra el model summary:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
También podemos trazar el modelo como un gráfico:
###Code
keras.utils.plot_model(model, 'my_first_model.png')
###Output
_____no_output_____
###Markdown
Y opcionalmente mostrar la entrada y la salida de la forma de cada capa en la gráfica ploteada:
###Code
keras.utils.plot_model(model, 'my_first_model_with_shape_info.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Esta figura y el código que escribimos son prácticamente idénticos. En la versión de código, las flechas de conexión simplemente se reemplazan por la operación de llamada.Un "gráfico de capas" es una imagen mental muy intuitiva para un modelo de aprendizaje profundo, y la API funcional es una forma de crear modelos que reflejan de cerca esta imagen mental. Entrenamiento, evaluación e inferencia.El entrenamiento, la evaluación y la inferencia funcionan exactamente de la misma manera para los modelos construidosutilizando la API funcional como para los modelos secuenciales.Aquí hay una demostración rápida.Aquí cargamos datos de imagen MNIST, los rediseñamos en vectores,ajustar el modelo en los datos (mientras se monitorea el rendimiento en una división de validación),y finalmente evaluamos nuestro modelo en los datos de prueba:
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=64,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
###Output
_____no_output_____
###Markdown
For a complete guide about model training and evaluation, see the [Guide to training and evaluation](./train_and_evaluate.ipynb).

Saving and serialization

Saving and serialization work exactly the same way for models built with the functional API as for Sequential models.

A standard way to save a functional model is to call `model.save()` to save the whole model into a single file. You can later recreate the same model from this file, even if you no longer have access to the code that created the model.

This file includes:
- The model's architecture
- The model's weight values (which were learned during training)
- The model's training config (what you passed to `compile`), if any
- The optimizer and its state, if any (this enables you to restart training where you left off)
###Code
model.save('path_to_my_model.h5')
del model
# Recreate the exact same model, purely from the file:
model = keras.models.load_model('path_to_my_model.h5')
###Output
_____no_output_____
###Markdown
For a complete guide about model saving, see the [Guide to saving and serializing models](./save_and_serialize.ipynb).

Using the same graph of layers to define multiple models

In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means a single graph of layers can be used to generate multiple models.

In the example below, we use the same stack of layers to instantiate two models: an `encoder` model that turns image inputs into 16-dimensional vectors, and an end-to-end `autoencoder` model for training.
###Code
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()
###Output
_____no_output_____
###Markdown
Note that we make the decoding architecture strictly symmetrical to the encoding architecture, so that we get an output shape that is the same as the input shape `(28, 28, 1)`. The reverse of a `Conv2D` layer is a `Conv2DTranspose` layer, and the reverse of a `MaxPooling2D` layer is an `UpSampling2D` layer.

All models are callable, just like layers

You can treat any model as if it were a layer, by calling it on an `Input` or on the output of another layer. Note that by calling a model you aren't just reusing its architecture, you're also reusing its weights.

Let's see this in action. Here's a different take on the autoencoder example that creates an encoder model and a decoder model, and chains them in two calls to obtain the autoencoder model:
###Code
encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
decoder = keras.Model(decoder_input, decoder_output, name='decoder')
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()
###Output
_____no_output_____
###Markdown
As you can see, models can be nested: a model can contain submodels (since a model is just like a layer).

A common use case for model nesting is *ensembling*. As an example, here's how to ensemble a set of models into a single model that averages their predictions:
###Code
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1, activation='sigmoid')(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Manipulating complex graph topologies

Models with multiple inputs and outputs

The functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API.

Here's a simple example. Suppose you're building a system for ranking custom issue tickets by priority and routing them to the right department.

Your model will have 3 inputs:
- Title of the ticket (text input)
- Text body of the ticket (text input)
- Any tags added by the user (categorical input)

It will have two outputs:
- Priority score between 0 and 1 (scalar sigmoid output)
- The department that should handle the ticket (softmax output over the set of departments)

Let's build this model in a few lines with the functional API.
###Code
num_tags = 12  # Number of unique issue tags
num_words = 10000  # Size of vocabulary obtained when preprocessing text data
num_departments = 4  # Number of departments for predictions
title_input = keras.Input(shape=(None,), name='title')  # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name='body')  # Variable-length sequence of ints
tags_input = keras.Input(shape=(num_tags,), name='tags')  # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the body into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce the sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce the sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, activation='sigmoid', name='priority')(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, activation='softmax', name='department')(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred])
###Output
_____no_output_____
###Markdown
Plotting the model:
###Code
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
When compiling this model, we can assign different losses to each output. You can even assign different weights to each loss, to modulate its contribution to the total training loss.
###Code
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=['binary_crossentropy', 'categorical_crossentropy'],
loss_weights=[1., 0.2])
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify the losses like this:
###Code
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss={'priority': 'binary_crossentropy',
'department': 'categorical_crossentropy'},
loss_weights=[1., 0.2])
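# Aside (not in the original guide): `loss_weights` can likewise be given
# as a dict keyed by output name, equivalent to the list form above:
# loss_weights={'priority': 1., 'department': 0.2}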
###Output
_____no_output_____
###Markdown
We can train the model by passing lists of Numpy arrays of inputs and targets:
###Code
import numpy as np
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},
{'priority': priority_targets, 'department': dept_targets},
epochs=2,
batch_size=32)
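# Aside (not in the original guide): the same data can be packaged as a
# tf.data.Dataset yielding (inputs_dict, targets_dict) tuples, assuming
# `tensorflow` was imported as `tf` in the Setup cell above.
dataset = tf.data.Dataset.from_tensor_slices((
    {'title': title_data, 'body': body_data, 'tags': tags_data},
    {'priority': priority_targets, 'department': dept_targets})).batch(32)
model.fit(dataset, epochs=2)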
###Output
_____no_output_____
###Markdown
Note that when calling `fit` with a `Dataset` object, it should yield either a tuple of lists like `([title_data, body_data, tags_data], [priority_targets, dept_targets])` or a tuple of dicts like `({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets})` (a minimal sketch appears at the end of the code cell above).

For a more detailed explanation, see the complete [Guide to training and evaluation](./train_and_evaluate.ipynb).

A toy ResNet model

In addition to models with multiple inputs and outputs, the functional API makes it easy to manipulate non-linear connectivity topologies, that is, models where layers are not connected sequentially. This also cannot be handled with the Sequential API (as its name indicates).

A common use case for this is residual connections. Let's build a toy ResNet model for CIFAR10 to demonstrate this.
###Code
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs, name='toy_resnet')
model.summary()
###Output
_____no_output_____
###Markdown
Plotting the model:
###Code
keras.utils.plot_model(model, 'mini_resnet.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Let's train it:
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss='categorical_crossentropy',
metrics=['acc'])
model.fit(x_train, y_train,
batch_size=64,
epochs=1,
validation_split=0.2)
###Output
_____no_output_____
###Markdown
Shared layers

Another good use of the functional API are models that use shared layers. Shared layers are layer instances that get reused multiple times in the same model: they learn features that correspond to multiple paths in the graph of layers.

Shared layers are often used to encode inputs that come from similar spaces (say, two different pieces of text that feature similar vocabulary), since they enable sharing of information across these different inputs, and they make it possible to train such a model on less data. If a given word is seen in one of the inputs, that will benefit the processing of all inputs that go through the shared layer.

To share a layer in the functional API, just call the same layer instance multiple times. For instance, here's an `Embedding` layer shared across two different text inputs:
###Code
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype='int32')
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype='int32')
# We reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
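# A hedged sketch (not from the original guide; the names below are
# illustrative): both branches flow through the single Embedding instance
# above, so one weight matrix serves both inputs -- a common "twin encoder"
# pattern for text-similarity models.
pooled_a = layers.GlobalAveragePooling1D()(encoded_input_a)
pooled_b = layers.GlobalAveragePooling1D()(encoded_input_b)
similarity = layers.Dot(axes=1, normalize=True)([pooled_a, pooled_b])
twin_model = keras.Model([text_input_a, text_input_b], similarity)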
###Output
_____no_output_____
###Markdown
Extracting and reusing nodes in the graph of layers

Because the graph of layers you are manipulating in the functional API is a static data structure, it can be accessed and inspected. This is how we are able to plot functional models as images, for instance.

This also means that we can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere. This is extremely useful for feature extraction, for example!

Let's look at an example. This is a VGG19 model with weights pre-trained on ImageNet:
###Code
from tensorflow.keras.applications import VGG19
vgg19 = VGG19()
###Output
_____no_output_____
###Markdown
And these are the intermediate activations of the model, obtained by querying the graph data structure:
###Code
features_list = [layer.output for layer in vgg19.layers]
###Output
_____no_output_____
###Markdown
We can use these features to create a new feature-extraction model that returns the values of the intermediate layer activations -- and we can do all of this in 3 lines.
###Code
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)
###Output
_____no_output_____
###Markdown
This comes in handy when [implementing neural style transfer](https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398), among other things.

Extending the API by writing custom layers

tf.keras has a wide range of built-in layers. Here are a few examples:
- Convolutional layers: `Conv1D`, `Conv2D`, `Conv3D`, `Conv2DTranspose`, etc.
- Pooling layers: `MaxPooling1D`, `MaxPooling2D`, `MaxPooling3D`, `AveragePooling1D`, etc.
- RNN layers: `GRU`, `LSTM`, `ConvLSTM2D`, etc.
- `BatchNormalization`, `Dropout`, `Embedding`, etc.

If you don't find what you need, it's easy to extend the API by creating your own layers.

All layers subclass the `Layer` class and implement:
- A `call` method, which specifies the computation done by the layer
- A `build` method, which creates the weights of the layer (note that this is just a style convention; you could create weights in `__init__` as well)

To learn more about creating layers from scratch, see the [Guide to writing layers and models from scratch](./custom_layers_and_models.ipynb).

Here's a simple implementation of a `Dense` layer:
###Code
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
###Output
_____no_output_____
###Markdown
If you want your custom layer to support serialization, you should also define a `get_config` method that returns the constructor arguments of the layer instance:
###Code
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(
config, custom_objects={'CustomDense': CustomDense})
###Output
_____no_output_____
###Markdown
Optionally, you could also implement the classmethod `from_config(cls, config)`, which is in charge of recreating a layer instance given its config dictionary. The default implementation of `from_config` is:

```python
def from_config(cls, config):
  return cls(**config)
```

When to use the functional API

How do you decide whether to use the functional API to create a new model, or just subclass the `Model` class directly?

In general, the functional API is higher-level, easier and safer to use, and has a number of features that subclassed models do not support.

However, model subclassing gives you greater flexibility when building models that are not easily expressible as directed acyclic graphs of layers (for instance, you could not implement a Tree-RNN with the functional API; you would have to subclass `Model` directly).

Here are the strengths of the functional API:

The properties listed below are also all true for Sequential models (which are also data structures), but they aren't true for subclassed models (which are Python bytecode, not data structures).

It is less verbose.

No `super(MyClass, self).__init__(...)`, no `def call(self, ...):`, etc.

Compare:

```python
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
mlp = keras.Model(inputs, outputs)
```

With the subclassed version:

```python
class MLP(keras.Model):

  def __init__(self, **kwargs):
    super(MLP, self).__init__(**kwargs)
    self.dense_1 = layers.Dense(64, activation='relu')
    self.dense_2 = layers.Dense(10)

  def call(self, inputs):
    x = self.dense_1(inputs)
    return self.dense_2(x)

# Instantiate the model.
mlp = MLP()
# Necessary to create the model's state.
# The model doesn't have a state until it's called at least once.
_ = mlp(tf.zeros((1, 32)))
```

It validates your model while you're defining it.

In the functional API, your input specification (shape and dtype) is created in advance (via `Input`), and every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it will raise a helpful error message if not.

This guarantees that any model you can build with the functional API will run. All debugging (other than convergence-related debugging) will happen statically during model construction, and not at execution time. This is similar to type checking in a compiler.

Your functional model is plottable and inspectable.

You can plot the model as a graph, and you can easily access intermediate nodes in this graph -- for instance, to extract and reuse the activations of intermediate layers, as we saw in a previous example:

```python
features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
```

Your functional model can be serialized or cloned.

Because a functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file that allows you to recreate the exact same model without having access to any of the original code. See our [saving and serialization guide](./save_and_serialize.ipynb) for more details.
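As a quick illustration of the last two strengths, here is a minimal sketch (not from the original guide) of construction-time validation and of cloning / config round-tripping:

```python
# Construction-time validation: connecting incompatible shapes fails
# immediately, before any data is involved.
bad_inputs = keras.Input(shape=(32,))
try:
    layers.Conv2D(16, 3)(bad_inputs)  # Conv2D expects a 4D input, got 2D
except ValueError as e:
    print('Caught at definition time:', e)

# Cloning / config round-tripping: a functional model is a data structure.
clone = keras.models.clone_model(model)  # same architecture, fresh weights
rebuilt = keras.Model.from_config(model.get_config(),
                                  custom_objects={'CustomDense': CustomDense})
```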
Here are the weaknesses of the functional API:

It does not support dynamic architectures.

The functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all: for instance, recursive networks or Tree-RNNs do not follow this assumption and cannot be implemented in the functional API.

Sometimes, you just need to write everything from scratch.

When writing advanced architectures, you may want to do things that are outside the scope of "defining a DAG of layers": for instance, you may want to expose multiple custom training and inference methods on your model instance. This requires subclassing.

---

To dive deeper into the differences between the functional API and model subclassing, you can read [What are Symbolic and Imperative APIs in TensorFlow 2.0?](https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0-dfccecb01021).

Mix-and-match different API styles

Importantly, choosing between the functional API or model subclassing isn't a binary decision that restricts you to one category of models. All models in the tf.keras API can interact with each other, whether they're Sequential models, functional models, or subclassed models/layers written from scratch.

You can always use a functional model or Sequential model as part of a subclassed model/layer:
###Code
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
###Output
_____no_output_____
###Markdown
Conversely, you can use any subclassed Layer or Model in the functional API, as long as it implements a `call` method that follows one of the following patterns:

- `call(self, inputs, **kwargs)` where `inputs` is a tensor or a nested structure of tensors (e.g. a list of tensors), and where `**kwargs` are non-tensor (non-input) arguments.
- `call(self, inputs, training=None, **kwargs)` where `training` is a boolean indicating whether the layer should behave in training mode or in inference mode.
- `call(self, inputs, mask=None, **kwargs)` where `mask` is a boolean mask tensor (useful for RNNs, for instance).
- `call(self, inputs, training=None, mask=None, **kwargs)` -- of course, you can have both masking-specific and training-specific behavior at the same time.

In addition, if you implement the `get_config` method on your custom Layer or Model, the functional models you create with it will still be serializable and clonable.

Here's a quick example where we use a custom RNN, written from scratch, in a functional model (a sketch of the `training` pattern is appended at the end of the code cell below):
###Code
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
self.classifier = layers.Dense(1, activation='sigmoid')
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that we specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when we create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
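# A minimal sketch (hypothetical `NoiseAugment` layer, not from the original
# guide) of the `call(self, inputs, training=None)` pattern listed above:
# noise is added during training and the layer is a no-op at inference time.
class NoiseAugment(layers.Layer):
    def __init__(self, stddev=0.1):
        super(NoiseAugment, self).__init__()
        self.stddev = stddev

    def call(self, inputs, training=None):
        if training:
            return inputs + tf.random.normal(tf.shape(inputs), stddev=self.stddev)
        return inputs

_ = NoiseAugment()(tf.zeros((2, 4)), training=True)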
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
La API funcional "Keras" en TensorFlow Ver en TensorFlow.org Correr en Google Colab Ver código fuente en GitHub Descargar notebook Note: Nuestra comunidad de Tensorflow ha traducido estos documentos. Como las traducciones de la comunidadson basados en el "mejor esfuerzo", no hay ninguna garantia que esta sea un reflejo preciso y actual de la [Documentacion Oficial en Ingles](https://www.tensorflow.org/?hl=en).Si tienen sugerencias sobre como mejorar esta traduccion, por favor envian un "Pull request"al siguiente repositorio [tensorflow/docs](https://github.com/tensorflow/docs).Para ofrecerse como voluntario o hacer revision de las traducciones de la Comunidadpor favor contacten al siguiente grupo [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Setup
###Code
import tensorflow as tf
tf.keras.backend.clear_session() # Reseteo sencillo
###Output
_____no_output_____
###Markdown
IntroduccionYa estás familiarizado con el uso del metodo `keras.Sequential()` para crear modelos.La API funcional es una forma de crear modelos mas dinamicos que con ` Sequential `: La API funcional puede manejar modelos con topología no lineal, modelos con capas compartidas y modelos con múltiples entradas o salidas.Se basa en la idea de que un modelo de aprendizaje profundosuele ser un gráfico acíclico dirigido (DAG) de capas.La API funcional es un conjunto de herramientas para **construir gráficos de capas**.Considera el siguiente modelo:```(input: 784-vectores dimensionales) ↧[Dense (64 units, activacion relu)] ↧[Dense (64 units, activacion relu)] ↧[Dense (10 units, activacion softmax)] ↧(output: distribución de probabilidad en 10 clases)```Es una simple grafica de tres capas.Para construir este modelo con la API funcional,comenzarías creando un nodo de entrada:
###Code
from tensorflow import keras
inputs = keras.Input(shape=(784,))
###Output
_____no_output_____
###Markdown
Aqui solo especificamos el tipo de nuestra data set: 784-vectores dimensionales.Nota que el tamaño del batch siempre debe ser omitido, solo se incluye el tipo de la data set.Para una input de tipo imágen ` (31,32,3) ` hubiese sido:
###Code
img_inputs = keras.Input(shape=(32, 32, 3))
###Output
_____no_output_____
###Markdown
Lo que se devuelve, ` input `, contiene información sobre la forma y el tipo de dato que se espera ingresa en tu modelo:
###Code
inputs.shape
inputs.dtype
###Output
_____no_output_____
###Markdown
Puedes crear un nuevo nodo en el grafico de capas mandando a llamar al objeto ` input `.
###Code
from tensorflow.keras import layers
dense = layers.Dense(64, activation='relu')
x = dense(inputs)
###Output
_____no_output_____
###Markdown
La acción "layer call" es como dibujar una flecha desde "entradas" a la capa que creamos.Estamos "pasando" las entradas a la capa `dense`, y afuera obtenemos` x`.Agreguemos algunas capas más a nuestro gráfico de capas:La acción "llamada a la capa" es como dibujar una flecha de "entradas" a la capa que creamos.Estamos pasando las entradas a una capa mas densa, y respecto a la salida obtenemos una ` x `.
###Code
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
LLegados a este punto, podemos crear un ` Modelo ` especificando sus entradas y salidas en las capas de graficas.
###Code
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Recapitulando, esta es nuestra definción completa del proceso:
###Code
inputs = keras.Input(shape=(784,), name='img')
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
###Output
_____no_output_____
###Markdown
Veamos como se muestra el model summary:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
También podemos trazar el modelo como un gráfico:
###Code
keras.utils.plot_model(model, 'my_first_model.png')
###Output
_____no_output_____
###Markdown
Y opcionalmente mostrar la entrada y la salida de la forma de cada capa en la gráfica ploteada:
###Code
keras.utils.plot_model(model, 'my_first_model_with_shape_info.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Esta figura y el código que escribimos son prácticamente idénticos. En la versión de código, las flechas de conexión simplemente se reemplazan por la operación de llamada.Un "gráfico de capas" es una imagen mental muy intuitiva para un modelo de aprendizaje profundo, y la API funcional es una forma de crear modelos que reflejan de cerca esta imagen mental. Entrenamiento, evaluación e inferencia.El entrenamiento, la evaluación y la inferencia funcionan exactamente de la misma manera para los modelos construidosutilizando la API funcional como para los modelos secuenciales.Aquí hay una demostración rápida.Aquí cargamos datos de imagen MNIST, los rediseñamos en vectores,ajustar el modelo en los datos (mientras se monitorea el rendimiento en una división de validación),y finalmente evaluamos nuestro modelo en los datos de prueba:
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=64,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
###Output
_____no_output_____
###Markdown
Para obtener una guía completa sobre el entrenamiento y evaluación de modelos, consulta [Guía de entrenamiento y evaluación](./train_and_evaluate.ipynb). Almacenado y serializaciónEl almacenado y la serialización funcionan exactamente de la misma manera para los modelos construidosutilizando la API funcional como para los modelos secuenciales.Una forma estándar de guardar un modelo funcional es llamar a `model.save ()` para guardar todo el modelo en un solo archivo.Posteriormente, puede volver a crear el mismo modelo a partir de este archivo, incluso si ya no tiene acceso al código.eso creó el modelo.Este archivo incluye:- La arquitectura del modelo.- Los valores de peso del modelo (que se aprendieron durante el entrenamiento)- La configuración de entrenamiento del modelo (lo que pasó a `compilar`), si corresponde- El optimizador y su estado, si corresponde (esto le permite reiniciar el entrenamiento donde lo dejó)
###Code
model.save('path_to_my_model.h5')
del model
# Recrea el mismo modelo, desde el archivo:
model = keras.models.load_model('path_to_my_model.h5')
###Output
_____no_output_____
###Markdown
Para obtener una guía completa sobre el guardado de modelos, consulta [Guía para guardar y serializar modelos](./save_and_serialize.ipynb). Usando el mismo gráfico de capas para definir múltiples modelosEn la API funcional, los modelos se crean especificando sus entradasy salidas en un gráfico de capas. Eso significa que un solo gráfico de capasSe puede utilizar para generar múltiples modelos.En el siguiente ejemplo, usamos la misma arquitectura de capas para crear instancias de dos modelos:un modelo de "codificador" que convierte las entradas de imagen en vectores de 16 dimensiones,y un modelo completo de `autoencoder` para entrenamiento.
###Code
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()
###Output
_____no_output_____
###Markdown
Tenga en cuenta que hacemos que la arquitectura de decodificación sea estrictamente simétrica a la arquitectura de codificación,para que obtengamos una forma de salida que sea igual a la forma de entrada `(28, 28, 1)`.El reverso de una capa `Conv2D` es una capa` Conv2DTranspose`, y el reverso de una capa `MaxPooling2D`La capa es una capa `UpSampling2D`. Todos los modelos son invocables, al igual que las capas.Puede tratar cualquier modelo como si fuera una capa, llamándolo en una `Entrada` o en la salida de otra capa.Tenga en cuenta que al llamar a un modelo no solo está reutilizando la arquitectura del modelo, también está reutilizando sus pesos.Veamos esto en acción. Aquí hay una versión diferente del ejemplo de autoencoder que crea un modelo de codificador, un modelo de decodificador,y encadenarlos en dos llamadas para obtener el modelo de autoencoder:
###Code
encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
decoder = keras.Model(decoder_input, decoder_output, name='decoder')
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()
###Output
_____no_output_____
###Markdown
Como puede ver, el modelo puede estar anidado: un modelo puede contener submodelos (ya que un modelo es como una capa).Un caso de uso común para la anidación de modelos es * ensamblaje *.Como ejemplo, a continuación se explica cómo agrupar un conjunto de modelos en un solo modelo que promedia sus predicciones:
###Code
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1, activation='sigmoid')(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Manipulación de topologías gráficas complejas Modelos con múltiples entradas y salidasLa API funcional facilita la manipulación de múltiples entradas y salidas.Esto no se puede manejar con la API secuencial.Aquí hay un ejemplo simple.Supongamos que está creando un sistema para clasificar los tickets de emisión personalizados por prioridad y enrutarlos al departamento correcto.Tu modelo tendrá 3 entradas:- Título del ticket (entrada de texto)- Cuerpo del texto del ticket (entrada de texto)- Cualquier etiqueta agregada por el usuario (entrada categórica)Tendrá dos salidas:- Puntuación de prioridad entre 0 y 1 (salida sigmoidea escalar)- El departamento que debe manejar el ticket (salida softmax sobre el conjunto de departamentos)Construyamos este modelo en pocas líneas con la API funcional.
###Code
num_tags = 12 # Número de etiquetas de problemas únicos
num_words = 10000 # Tamaño del vocabulario obtenido al preprocesar datos de texto
num_departments = 4 # Número de departamentos para predicciones.
title_input = keras.Input(shape=(None,), name='title') # Secuencia de longitud variable de entradas
body_input = keras.Input(shape=(None,), name='body') # Secuencia de longitud variable de entradas
tags_input = keras.Input(shape=(num_tags,), name='tags') # Vectores binarios de tamaño `num_tags`
# Ingresa cada palabra en el título en un vector de 64 dimensiones
title_features = layers.Embedding(num_words, 64)(title_input)
# Ingresa cada palabra en el texto en un vector de 64 dimensiones
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce la secuencia de palabras ingresadas en el título en un solo vector de 128 dimensiones
title_features = layers.LSTM(128)(title_features)
# Reduce la secuencia de palabras ingresadas en el cuerpo en un solo vector de 32 dimensiones
body_features = layers.LSTM(32)(body_features)
# Combina todas las funciones disponibles en un solo vector grande mediante concatenación
x = layers.concatenate([title_features, body_features, tags_input])
# Pegua una regresión logística para la predicción de prioridad en la parte superior de las características
priority_pred = layers.Dense(1, activation='sigmoid', name='priority')(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, activation='softmax', name='department')(x)
# Instancia un modelo de extremo a extremo que prediga tanto la prioridad como el departamento
model = keras.Model(inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred])
###Output
_____no_output_____
###Markdown
Ploteando el modelo:
###Code
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Al compilar este modelo, podemos asignar diferentes pérdidas a cada salida.Incluso puede asignar diferentes pesos a cada pérdida, para modular sucontribución a la pérdida total de entrenamiento.
###Code
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=['binary_crossentropy', 'categorical_crossentropy'],
loss_weights=[1., 0.2])
###Output
_____no_output_____
###Markdown
Como dimos nombres a nuestras capas de salida, también podríamos especificar la pérdida de esta manera:
###Code
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss={'priority': 'binary_crossentropy',
'department': 'categorical_crossentropy'},
loss_weights=[1., 0.2])
###Output
_____no_output_____
###Markdown
Podemos entrenar el modelo pasando listas de matrices Numpy de entradas y objetivos:
###Code
import numpy as np
# Datos de entrada ficticios
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')
# Datos objetivo ficticios
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},
{'priority': priority_targets, 'department': dept_targets},
epochs=2,
batch_size=32)
###Output
_____no_output_____
###Markdown
Al llamar al ajuste con un objeto `Dataset`, debería producir untupla de listas como `([title_data, body_data, tags_data], [priority_targets, dept_targets])`o una tupla de diccionarios como`({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets})`.Para obtener una explicación más detallada, consulta la guía completa [Guía de entrenamiento y evaluación](./train_and_evaluate.ipynb). Un modelo de Red neuronal residual de jugueteAdemás de los modelos con múltiples entradas y salidas,La API funcional facilita la manipulación de topologías de conectividad no lineal,es decir, modelos donde las capas no están conectadas secuencialmente.Esto tampoco se puede manejar con la API secuencial (como su nombre lo indica).Un caso de uso común para esto son las conexiones residuales.Construyamos un modelo de ResNet de juguete para CIFAR10 para demostrar esto.
###Code
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs, name='toy_resnet')
model.summary()
###Output
_____no_output_____
###Markdown
Ploteando el modelo:
###Code
keras.utils.plot_model(model, 'mini_resnet.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Vamos a entrenarlo:
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss='categorical_crossentropy',
metrics=['acc'])
model.fit(x_train, y_train,
batch_size=64,
epochs=1,
validation_split=0.2)
###Output
_____no_output_____
###Markdown
Compartir capasOtro buen uso de la API funcional son los modelos que usan capas compartidas. Las capas compartidas son instancias de capa que se reutilizan varias veces en un mismo modelo: aprenden características que corresponden a múltiples rutas en el gráfico de capas.Las capas compartidas a menudo se usan para codificar entradas que provienen de espacios similares (por ejemplo, dos piezas de texto diferentes que presentan un vocabulario similar), ya que permiten compartir información entre estas diferentes entradas y hacen posible entrenar un modelo de este tipo en menos datos. Si se ve una palabra determinada en una de las entradas, eso beneficiará el procesamiento de todas las entradas que pasan por la capa compartida.Para compartir una capa en la API funcional, simplemente llame a la misma instancia de capa varias veces. Por ejemplo, aquí hay una capa `Ingresa (del ingles Embedding)` compartida entre dos entradas de texto diferentes:
###Code
# Ingreso de 1000 palabras únicas asignadas a vectores de 128 dimensiones
shared_embedding = layers.Embedding(1000, 128)
# Secuencia de longitud variable de enteros
text_input_a = keras.Input(shape=(None,), dtype='int32')
# Secuencia de longitud variable de enteros
text_input_b = keras.Input(shape=(None,), dtype='int32')
# Reutilizamos la misma capa para codificar ambas entradas
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
###Output
_____no_output_____
###Markdown
Extracción y reutilización de nodos en el gráfico de capas Debido a que el gráfico de capas que está manipulando en la API funcional es una estructura de datos estática, se puede acceder e inspeccionarlo. Así es como podemos trazar modelos funcionales como imágenes, por ejemplo.Esto también significa que podemos acceder a las activaciones de capas intermedias ("nodos" en el gráfico) y reutilizarlas en otros lugares. ¡Esto es extremadamente útil para la extracción de características, por ejemplo!Veamos un ejemplo. Este es un modelo VGG19 con pesas pre-entrenadas en ImageNet:
###Code
from tensorflow.keras.applications import VGG19
vgg19 = VGG19()
###Output
_____no_output_____
###Markdown
Y estas son las activaciones intermedias del modelo, obtenidas al consultar la estructura de datos del gráfico:
###Code
features_list = [layer.output for layer in vgg19.layers]
###Output
_____no_output_____
###Markdown
Podemos usar estas características para crear un nuevo modelo de extracción de características, que devuelve los valores de las activaciones de la capa intermedia, y podemos hacer todo esto en 3 líneas.
###Code
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)
###Output
_____no_output_____
###Markdown
Esto es útil cuando [implementa la transferencia de estilo neural] (https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution- 7d541ac31398), entre otras cosas. Extendiendo la API escribiendo capas personalizadastf.keras tiene una amplia gama de capas incorporadas. Aquí están algunos ejemplos:- Capas convolucionales: `Conv1D`,` Conv2D`, `Conv3D`,` Conv2DTranspose`, etc.- Capas de agrupación: `MaxPooling1D`,` MaxPooling2D`, `MaxPooling3D`,` AveragePooling1D`, etc.- Capas RNN: `GRU`,` LSTM`, `ConvLSTM2D`, etc.- `BatchNormalization`,` Dropout`, `Embedded`, etc.Si no encuentras lo que necesitas, es fácil extender la API creando tus propias capas.Todas las capas subclasifican la clase `Layer` e implementan:- Un método `call`, que especifica el cálculo realizado por la capa.- Un método `build`, que crea los pesos de la capa (tenga en cuenta que esto es solo una convención de estilo; también puede crear pesos en` __init__`).Para obtener más información sobre cómo crear capas desde cero, consulta la guía [Guía para escribir capas y modelos desde cero](./custom_layers_and_models.ipynb).Aquí hay una implementación simple de una capa `Densa`:
###Code
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
###Output
_____no_output_____
###Markdown
Si deseas que tu capa personalizada admita la serialización, también debes definir un método `get_config`,que devuelve los argumentos del constructor de la instancia de capa:
###Code
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(
config, custom_objects={'CustomDense': CustomDense})
###Output
_____no_output_____
###Markdown
Opcionalmente, también podría implementar el método de clase `from_config (cls, config)`, que se encarga de recrear una instancia de capa dado su diccionario de configuración. La implementación predeterminada de `from_config` es:```pythondef from_config(cls, config): return cls(**config)``` Cuándo usar la API funcional¿Cómo decidir si usar la API funcional para crear un nuevo modelo o simplemente subclasificar la clase `Modelo` directamente?En general, la API funcional es de nivel superior, más fácil y segura de usar, y tiene una serie de características que los modelos de subclases no admiten.Sin embargo, la subclasificación de modelos le brinda una mayor flexibilidad al crear modelos que no se pueden expresar fácilmente como gráficos acíclicos dirigidos de capas (por ejemplo, no podría implementar un Tree-RNN con la API funcional, tendría que subclasificar `Model` directamente). Estas son las fortalezas de la API funcional:Las propiedades enumeradas a continuación también son ciertas para los modelos secuenciales (que también son estructuras de datos), pero no son ciertas para los modelos subclasificados (que son bytecode de Python, no estructuras de datos). Es menos detallado.No ` super (MyClass, self) .__ init __ (...)`, no `def call (self, ...): `, etc.Comparar:```pitóninput = keras.Input (shape = (32,))x = capas. Denso (64, activación = 'relu') (entradas)salidas = capas. Denso (10) (x)mlp = keras.Model (entradas, salidas)```Con la versión subclaseada:```pitónclase MLP (keras.Model): def __init __ (self, ** kwargs): super (MLP, self) .__ init __ (** kwargs) self.dense_1 = capas.Dense (64, activación = 'relu') self.dense_2 = layers.Dense (10) llamada def (auto, entradas): x = self.dense_1 (entradas) return self.dense_2 (x) Instanciar el modelo.mlp = MLP () Necesario para crear el estado del modelo. El modelo no tiene un estado hasta que se llama al menos una vez._ = mlp (tf.zeros ((1, 32)))``` Valida su modelo mientras lo está definiendo.En la API funcional, su especificación de entrada (forma y dtype) se crea de antemano (a través de `Input`), y cada vez que llama a una capa, la capa comprueba que la especificación que se le pasa coincide con sus supuestos, y generará un mensaje de error útil si no.Esto garantiza que se ejecutará cualquier modelo que pueda construir con la API funcional. Toda la depuración (que no sea la depuración relacionada con la convergencia) ocurrirá estáticamente durante la construcción del modelo, y no en el momento de la ejecución. Esto es similar a la comprobación de tipo en un compilador. Tu modelo funcional es trazable e inspeccionable.Puedes trazar el modelo como un gráfico, y puedes acceder fácilmente a los nodos intermedios en este gráfico, por ejemplo, para extraer y reutilizar las activaciones de las capas intermedias, como vimos en un ejemplo anterior:```pitónfeatures_list = [layer.output para la capa en vgg19.layers]feat_extraction_model = keras.Model (input = vgg19.input, salidas = features_list)``` Su modelo funcional puede ser serializado o clonado.Debido a que un modelo funcional es una estructura de datos en lugar de un fragmento de código, es serializable de forma segura y se puede guardar como un único archivo que le permite recrear exactamente el mismo modelo sin tener acceso a ninguno de los códigos originales. Consulta nuestra [guía de guardado y serialización] (./save_and_serialize.ipynb) para obtener más detalles. 
Estas son las debilidades de la API funcional: No admite arquitecturas dinámicas.La API funcional trata los modelos como DAG de capas. Esto es cierto para la mayoría de las arquitecturas de aprendizaje profundo, pero no para todas: por ejemplo, las redes recursivas o los RNN de árbol no siguen este supuesto y no se pueden implementar en la API funcional. A veces, solo necesitas escribir todo desde cero.Al escribir actividades avanzadas, es posible que desee hacer cosas que están fuera del alcance de "definir un DAG de capas": por ejemplo, es posible que desee exponer múltiples métodos personalizados de entrenamiento e inferencia en su instancia de modelo. Esto requiere subclases.---Para profundizar más en las diferencias entre la API funcional y la subclasificación de modelos, puede leer [¿Qué son las API simbólicas e imperativas en TensorFlow 2.0?] (Https://medium.com/tensorflow/what-are-symbolic-and -imperative-apis-in-tensorflow-2-0-dfccecb01021). Mezcla y combina diferentes estilos de APIEs importante destacar que elegir entre la subclasificación de API funcional o modelo no es una decisión binaria que lo restringe a una categoría de modelos. Todos los modelos en la API tf.keras pueden interactuar con cada uno, ya sean modelos secuenciales, modelos funcionales o modelos / capas subclasificados escritos desde cero.Siempre puede usar un modelo funcional o modelo secuencial como parte de un modelo / capa subclasificado:
###Code
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
###Output
_____no_output_____
###Markdown
Inversamente, puede usar cualquier Capa o Modelo subclasificado en la API Funcional siempre que implemente un método `call` que siga uno de los siguientes patrones:- `call (self, input, ** kwargs)` donde `input` es un tensor o una estructura anidada de tensores (por ejemplo, una lista de tensores), y donde` ** kwargs` son argumentos no tensoriales (no input )- `call (self, input, training = None, ** kwargs)` donde `training` es un valor booleano que indica si la capa debe comportarse en modo de entrenamiento y modo de inferencia.- `call (self, input, mask = None, ** kwargs)` donde `mask` es un tensor de máscara booleano (útil para RNN, por ejemplo).- `call (self, input, training = None, mask = None, ** kwargs)` - por supuesto, puede tener tanto un comportamiento específico de enmascaramiento como de entrenamiento al mismo tiempo.Además, si implementa el método `get_config` en su Capa o Modelo personalizado, los modelos funcionales que cree con él seguirán siendo serializables y clonables.Aquí hay un ejemplo rápido en el que usamos un RNN personalizado escrito desde cero en un modelo funcional:
###Code
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
self.classifier = layers.Dense(1, activation='sigmoid')
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that we specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when we create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
The Keras functional API in TensorFlow

View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook

Note: Our TensorFlow community has translated these documents. Since community translations are best-effort, there is no guarantee that they are an accurate and up-to-date reflection of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions on how to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) repository. To volunteer to write or review community translations, contact the [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).

Setup
###Code
import tensorflow as tf
tf.keras.backend.clear_session()  # For easy reset of notebook state
###Output
_____no_output_____
###Markdown
Introduction

You are already familiar with using `keras.Sequential()` to create models. The functional API is a way to create models that are more flexible than `Sequential`: the functional API can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs.

It is based on the idea that a deep learning model is usually a directed acyclic graph (DAG) of layers. The functional API is a set of tools for **building graphs of layers**.

Consider the following model:

```
(input: 784-dimensional vectors)
       ↧
[Dense (64 units, relu activation)]
       ↧
[Dense (64 units, relu activation)]
       ↧
[Dense (10 units, softmax activation)]
       ↧
(output: probability distribution over 10 classes)
```

It is a simple graph of three layers. To build this model with the functional API, you would start by creating an input node:
###Code
from tensorflow import keras
inputs = keras.Input(shape=(784,))
###Output
_____no_output_____
###Markdown
Here we just specify the shape of our data: 784-dimensional vectors. Note that the batch size must always be omitted; only the shape of each sample is included. For an image input of shape `(32, 32, 3)` it would have been:
###Code
img_inputs = keras.Input(shape=(32, 32, 3))
###Output
_____no_output_____
###Markdown
What gets returned, `inputs`, contains information about the shape and dtype of the data you expect to feed into your model:
###Code
inputs.shape
inputs.dtype
###Output
_____no_output_____
###Markdown
You can create a new node in the graph of layers by calling a layer on this `inputs` object:
###Code
from tensorflow.keras import layers
dense = layers.Dense(64, activation='relu')
x = dense(inputs)
###Output
_____no_output_____
###Markdown
The "layer call" action is like drawing an arrow from "inputs" to the layer we created: we are "passing" the inputs to the `dense` layer, and we get `x` as the output. Let's add a few more layers to our graph of layers:
###Code
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
At this point, we can create a `Model` by specifying its inputs and outputs in the graph of layers:
###Code
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
To recap, this is our full definition process:
###Code
inputs = keras.Input(shape=(784,), name='img')
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
###Output
_____no_output_____
###Markdown
Let's check out what the model summary looks like:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
We can also plot the model as a graph:
###Code
keras.utils.plot_model(model, 'my_first_model.png')
###Output
_____no_output_____
###Markdown
And, optionally, display the input and output shape of each layer in the plotted graph:
###Code
keras.utils.plot_model(model, 'my_first_model_with_shape_info.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
This figure and the code we wrote are virtually identical. In the code version, the connection arrows are simply replaced by the call operation.

A "graph of layers" is a very intuitive mental image for a deep learning model, and the functional API is a way to create models that closely mirror this mental image.

Training, evaluation, and inference

Training, evaluation, and inference work exactly the same way for models built with the functional API as for Sequential models. Here is a quick demonstration: we load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), and finally evaluate our model on the test data:
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=64,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
###Output
_____no_output_____
###Markdown
For a complete guide on training and evaluation, see the [training and evaluation guide](./train_and_evaluate.ipynb).

Saving and serialization

Saving and serialization work exactly the same way for models built with the functional API as for Sequential models.

A standard way to save a functional model is to call `model.save()` to save the whole model into a single file. You can later recreate the same model from this file, even if you no longer have access to the code that created the model.

This file includes:

- The model's architecture
- The model's weight values (which were learned during training)
- The model's training config (what you passed to `compile`), if any
- The optimizer and its state, if any (this lets you restart training where you left off)
###Code
model.save('path_to_my_model.h5')
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model('path_to_my_model.h5')
###Output
_____no_output_____
###Markdown
For a complete guide on saving models, see the [saving and serialization guide](./save_and_serialize.ipynb).

Using the same graph of layers to define multiple models

In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means a single graph of layers can be used to generate multiple models.

In the example below, we use the same stack of layers to instantiate two models: an `encoder` model that turns image inputs into 16-dimensional vectors, and an end-to-end `autoencoder` model for training.
###Code
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()
###Output
_____no_output_____
###Markdown
Note that we make the decoding architecture strictly symmetrical to the encoding architecture, so that we get an output shape that is the same as the input shape `(28, 28, 1)`. The reverse of a `Conv2D` layer is a `Conv2DTranspose` layer, and the reverse of a `MaxPooling2D` layer is an `UpSampling2D` layer.

All models are callable, just like layers

You can treat any model as if it were a layer, by calling it on an `Input` or on the output of another layer. Note that by calling a model you are not just reusing its architecture, you are also reusing its weights.

Let's see this in action. Here is a different take on the autoencoder example that creates an encoder model and a decoder model, and chains them in two calls to obtain the autoencoder model:
###Code
encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
decoder = keras.Model(decoder_input, decoder_output, name='decoder')
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()
###Output
_____no_output_____
###Markdown
As you can see, models can be nested: a model can contain submodels (since a model is just like a layer).

A common use case for model nesting is *ensembling*. As an example, here is how to ensemble a set of models into a single model that averages their predictions:
###Code
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1, activation='sigmoid')(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Manipulating complex graph topologies

Models with multiple inputs and outputs

The functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API.

Here is a simple example. Suppose you are building a system for ranking custom issue tickets by priority and routing them to the right department.

Your model will have 3 inputs:

- Title of the ticket (text input)
- Text body of the ticket (text input)
- Any tags added by the user (categorical input)

It will have two outputs:

- Priority score between 0 and 1 (scalar sigmoid output)
- The department that should handle the ticket (softmax output over the set of departments)

Let's build this model in a few lines with the functional API.
###Code
num_tags = 12  # Number of unique issue tags
num_words = 10000  # Size of vocabulary obtained when preprocessing text data
num_departments = 4  # Number of departments for predictions
title_input = keras.Input(shape=(None,), name='title')  # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name='body')  # Variable-length sequence of ints
tags_input = keras.Input(shape=(num_tags,), name='tags')  # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the body into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, activation='sigmoid', name='priority')(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, activation='softmax', name='department')(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred])
###Output
_____no_output_____
###Markdown
Let's plot the model:
###Code
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
When compiling this model, we can assign different losses to each output. You can even assign different weights to each loss, to modulate its contribution to the total training loss.
###Code
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=['binary_crossentropy', 'categorical_crossentropy'],
loss_weights=[1., 0.2])
###Output
_____no_output_____
###Markdown
Since we gave names to our output layers, we could also specify the losses like this:
###Code
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss={'priority': 'binary_crossentropy',
'department': 'categorical_crossentropy'},
loss_weights=[1., 0.2])
###Output
_____no_output_____
###Markdown
We can train the model by passing lists of Numpy arrays of inputs and targets:
###Code
import numpy as np
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},
{'priority': priority_targets, 'department': dept_targets},
epochs=2,
batch_size=32)
###Output
_____no_output_____
###Markdown
When calling `fit` with a `Dataset` object, it should yield either a tuple of lists like `([title_data, body_data, tags_data], [priority_targets, dept_targets])`, or a tuple of dicts like `({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets})` (a minimal sketch appears just below). For a more detailed explanation, check out the complete [Guide to Training & Evaluation](./train_and_evaluate.ipynb).

A toy ResNet model

In addition to models with multiple inputs and outputs, the functional API makes it easy to manipulate non-linear connectivity topologies, that is, models where layers are not connected sequentially. This cannot be handled with the Sequential API either (as its name indicates). A common use case for this is residual connections.
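As an aside (not part of the original guide), here is a minimal sketch of the `Dataset` variant promised above, reusing the dummy arrays defined in the training cell:

```python
# Build a tf.data.Dataset yielding (input dict, target dict) tuples,
# keyed by the input/output names given to the model above.
dataset = tf.data.Dataset.from_tensor_slices((
    {'title': title_data, 'body': body_data, 'tags': tags_data},
    {'priority': priority_targets, 'department': dept_targets}))
dataset = dataset.shuffle(buffer_size=1280).batch(32)

model.fit(dataset, epochs=2)
```

Now let's build a toy ResNet model for CIFAR10 to demonstrate residual connections: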
###Code
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs, name='toy_resnet')
model.summary()
###Output
_____no_output_____
###Markdown
Let's plot the model:
###Code
keras.utils.plot_model(model, 'mini_resnet.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Let's train it:
###Code
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss='categorical_crossentropy',
metrics=['acc'])
model.fit(x_train, y_train,
batch_size=64,
epochs=1,
validation_split=0.2)
###Output
_____no_output_____
###Markdown
Shared layers

Another good use of the functional API are models that use shared layers. Shared layers are layer instances that are reused multiple times in the same model: they learn features that correspond to multiple paths in the graph of layers.

Shared layers are often used to encode inputs that come from similar spaces (say, two different pieces of text that feature similar vocabulary), since they enable the sharing of information across these different inputs, and make it possible to train such a model on less data. If a given word is seen in one of the inputs, that benefits the processing of all inputs that go through the shared layer.

To share a layer in the functional API, just call the same layer instance multiple times. For instance, here is an `Embedding` layer shared across two different text inputs:
###Code
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype='int32')
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype='int32')
# We reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
###Output
_____no_output_____
###Markdown
Extracting and reusing nodes in the graph of layers

Because the graph of layers you are manipulating in the functional API is a static data structure, it can be accessed and inspected. This is how we are able to plot functional models as images, for instance.

This also means that we can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere. This is extremely useful for feature extraction, for example!

Let's look at an example. This is a VGG19 model with weights pre-trained on ImageNet:
###Code
from tensorflow.keras.applications import VGG19
vgg19 = VGG19()
###Output
_____no_output_____
###Markdown
And these are the intermediate activations of the model, obtained by querying the graph data structure:
###Code
features_list = [layer.output for layer in vgg19.layers]
###Output
_____no_output_____
###Markdown
We can use these features to create a new feature-extraction model that returns the values of the intermediate layer activations -- and we can do all of this in 3 lines.
###Code
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)
###Output
_____no_output_____
###Markdown
This comes in handy when [implementing neural style transfer](https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398), among other things.

Extending the API by writing custom layers

tf.keras has a wide range of built-in layers. Here are a few examples:

- Convolutional layers: `Conv1D`, `Conv2D`, `Conv3D`, `Conv2DTranspose`, etc.
- Pooling layers: `MaxPooling1D`, `MaxPooling2D`, `MaxPooling3D`, `AveragePooling1D`, etc.
- RNN layers: `GRU`, `LSTM`, `ConvLSTM2D`, etc.
- `BatchNormalization`, `Dropout`, `Embedding`, etc.

If you don't find what you need, it is easy to extend the API by creating your own layers. All layers subclass the `Layer` class and implement:

- A `call` method, which specifies the computation done by the layer.
- A `build` method, which creates the weights of the layer (note that this is just a style convention; you could also create weights in `__init__`).

To learn more about creating layers from scratch, check out the guide [Guide to writing layers and models from scratch](./custom_layers_and_models.ipynb).

Here is a simple implementation of a `Dense` layer:
###Code
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
###Output
_____no_output_____
###Markdown
If you want your custom layer to support serialization, you should also define a `get_config` method, which returns the constructor arguments of the layer instance:
###Code
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(
config, custom_objects={'CustomDense': CustomDense})
###Output
_____no_output_____
14_linear_algebra/14_String_Problem.ipynb | ###Markdown
14 Linear Algebra: String Problem Motivating problem: Two masses on three stringsTwo masses $M_1$ and $M_2$ are hung from a horizontal rod with length $L$ in such a way that a rope of length $L_1$ connects the left end of the rod to $M_1$, a rope of length $L_2$ connects $M_1$ and $M_2$, and a rope of length $L_3$ connects $M_2$ to the right end of the rod. The system is at rest (in equilibrium under gravity).![Schematic of the 1 rod/2 masses/3 strings problem.](1rod2masses3strings.png)Find the angles that the ropes make with the rod and the tension forces in the ropes. Theoretical backgroundTreat $\sin\theta_i$ and $\cos\theta_i$ together with $T_i$, $1\leq i \leq 3$, as unknowns that have to simultaneously fulfill the nine equations\begin{align}-T_1 \cos\theta_1 + T_2\cos\theta_2 &= 0\\ T_1 \sin\theta_1 - T_2\sin\theta_2 - W_1 &= 0\\ -T_2\cos\theta_2 + T_3\cos\theta_3 &= 0\\ T_2\sin\theta_2 + T_3\sin\theta_3 - W_2 &= 0\\ L_1\cos\theta_1 + L_2\cos\theta_2 + L_3\cos\theta_3 - L &= 0\\-L_1\sin\theta_1 -L_2\sin\theta_2 + L_3\sin\theta_3 &= 0\\\sin^2\theta_1 + \cos^2\theta_1 - 1 &= 0\\\sin^2\theta_2 + \cos^2\theta_2 - 1 &= 0\\\sin^2\theta_3 + \cos^2\theta_3 - 1 &= 0\end{align}Consider the nine equations a vector function $\mathbf{f}$ that takes a 9-vector $\mathbf{x}$ of the unknowns as argument:\begin{align}\mathbf{f}(\mathbf{x}) &= 0\\\mathbf{x} &= \left(\begin{array}{c}x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8\end{array}\right) =\left(\begin{array}{c}\sin\theta_1 \\ \sin\theta_2 \\ \sin\theta_3 \\\cos\theta_1 \\ \cos\theta_2 \\ \cos\theta_3 \\T_1 \\ T_2 \\ T_3\end{array}\right) \\\mathbf{L} &= \left(\begin{array}{c}L \\ L_1 \\ L_2 \\ L_3\end{array}\right), \quad\mathbf{W} = \left(\begin{array}{c}W_1 \\ W_2\end{array}\right)\end{align} Using the unknowns from above, our system of 9 coupled equations is:\begin{align}-x_6 x_3 + x_7 x_4 &= 0\\ x_6 x_0 - x_7 x_1 - W_1 &= 0\\-x_7x_4 + x_8 x_5 &= 0\\ x_7x_1 + x_8 x_2 - W_2 &= 0\\ L_1x_3 + L_2 x_4 + L_3 x_5 - L &= 0\\-L_1x_0 - L_2 x_1 + L_3 x_2 &= 0\\x_{0}^{2} + x_{3}^{2} - 1 &= 0\\x_{1}^{2} + x_{4}^{2} - 1 &= 0\\x_{2}^{2} + x_{5}^{2} - 1 &= 0\end{align} Solve the root-finding problem $\mathbf{f}(\mathbf{x}) = 0$ with the **generalized Newton-Raphson** algorithm:$$\mathsf{J}(\mathbf{x}) \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x})$$and $$\mathbf{x} \leftarrow \mathbf{x} + \Delta\mathbf{x}.$$ Problem setupSet the problem parameters and the objective function $\mathbf{f}(\mathbf{x})$
###Code
import numpy as np
# problem parameters
W = np.array([10, 20])
L = np.array([8, 3, 4, 4])
def f_2masses(x, L, W):
return np.array([
-x[6]*x[3] + x[7]*x[4],
x[6]*x[0] - x[7]*x[1] - W[0],
-x[7]*x[4] + x[8]*x[5],
x[7]*x[1] + x[8]*x[2] - W[1],
L[1]*x[3] + L[2]*x[4] + L[3]*x[5] - L[0],
-L[1]*x[0] - L[2]*x[1] + L[3]*x[2],
x[0]**2 + x[3]**2 - 1,
x[1]**2 + x[4]**2 - 1,
x[2]**2 + x[5]**2 - 1,
])
def fLW(x, L=L, W=W):
return f_2masses(x, L, W)
###Output
_____no_output_____
###Markdown
Initial values Guess some initial values (they don't have to fulfill the equations!):
###Code
# initial parameters
#theta0 = np.deg2rad([45, 45, 90])
#T0 = np.array([1, 1, 2])
#x0 = np.concatenate([np.sin(theta0), np.cos(theta0), T0])
x0 = np.array([1.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1])
x0
f_2masses(x0, L, W)
###Output
_____no_output_____
###Markdown
VisualizationPlot the positions of the 2 masses and the 3 strings for any solution vector $\mathbf{x}$:
###Code
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
def plot_2masses(x, L, W, **kwargs):
"""Plot 2 mass/3 string problem for parameter vector x and parameters L and W"""
kwargs.setdefault('linestyle', '-')
kwargs.setdefault('marker', 'o')
kwargs.setdefault('linewidth', 1)
ax = kwargs.pop('ax', None)
if ax is None:
ax = plt.subplot(111)
r0 = np.array([0, 0])
r1 = r0 + np.array([L[0], 0])
rod = np.transpose([r0, r1])
L1 = r0 + np.array([L[1]*x[3], -L[1]*x[0]])
L2 = L1 + np.array([L[2]*x[4], -L[2]*x[1]])
L3 = L2 + np.array([L[3]*x[5], L[3]*x[2]])
strings = np.transpose([r0, L1, L2, L3])
ax.plot(rod[0], rod[1], color="black", marker="d", linewidth=4)
ax.plot(strings[0], strings[1], **kwargs)
ax.set_aspect(1)
return ax
###Output
_____no_output_____
###Markdown
What does the initial guess look like?
###Code
plot_2masses(x0, L, W)
###Output
_____no_output_____
###Markdown
Jacobian Write a function `Jacobian(f, x, h=1e-5)` that computes the Jacobian matrix numerically (use the central difference algorithm).\begin{align}\mathbf{J} &= \frac{\partial \mathbf{f}}{\partial\mathbf{x}} \\J_{ij} &= \frac{\partial f_i(x_1, \dots, x_j, \dots)}{\partial x_j} \\ &\approx \frac{f_i(x_1, \dots, x_j + \frac{h}{2}, \dots) - f_i(x_1, \dots, x_j - \frac{h}{2}, \dots)}{h}\end{align}
###Code
def Jacobian(f, x, h=1e-5):
"""df_i/dx_j with central difference (fi(xj+h/2)-fi(xj-h/2))/h"""
J = np.zeros((len(f(x)), len(x)), dtype=np.float64)
hvec = np.zeros_like(x, dtype=np.float64)
for j in range(len(x)):
hvec *= 0
hvec[j] = 0.5*h
J[:, j] = (f(x + hvec) - f(x - hvec))/h
return J
###Output
_____no_output_____
###Markdown
Test `Jacobian()` on $$\mathbf{g}(\mathbf{x}) = \begin{pmatrix} x_0^2 - x_1 \\ x_0 \end{pmatrix}$$with analytical result$$\mathsf{J} = \left[\frac{\partial g_i}{\partial x_j}\right] =\begin{pmatrix} \frac{\partial g_0}{\partial x_0} & \frac{\partial g_0}{\partial x_1}\\ \frac{\partial g_1}{\partial x_0} & \frac{\partial g_1}{\partial x_1}\end{pmatrix}= \begin{pmatrix} 2 x_0 & -1\\ 1 & 0\end{pmatrix}$$ Given a test vector $\mathbf{x}_\text{test} = (1, 0)$, what is the numerical answer for $\mathsf{J}(\mathbf{x}_\text{test})$?$$\mathsf{J}(\mathbf{x}_\text{test}) = \begin{pmatrix} 2 & -1\\ 1 & 0\end{pmatrix}$$Test your `Jacobian()` function with $\mathbf{x}_\text{test}$ and check that you get the same answer:
###Code
def g(x):
return np.array([
x[0]**2 - x[1],
x[0]
])
x_test = np.array([1, 0])
J = Jacobian(g, x_test)
print(J)
###Output
[[ 2. -1.]
[ 1. 0.]]
###Markdown
Test that it also works for our starting vector:
###Code
J0 = Jacobian(fLW, x0)
J0
J0.shape
###Output
_____no_output_____
###Markdown
n-D Newton-Raphson Root Finding Write a function `newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5)` to find a root for a vector function `f(x)=0`. (See also [13 Root-finding by trial-and-error](https://asu-compmethodsphysics-phy494.github.io/ASU-PHY494/2020/03/26/13_Root_finding/) and the _1D Newton-Raphson algorithm_ in [13-Root-finding.ipynb](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/13_root_finding/13-Root-finding.ipynb).) As a convergence criterion we demand that the length of the vector `f(x)` (the norm --- see `np.linalg.norm`) be less than the tolerance.
###Code
def newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5):
"""n-D Newton-Raphson: solves f(x) = 0.
Iterate until |f(x)| < tol or nmax steps.
"""
x = x.copy()
for istep in range(Nmax):
fx = f(x)
if np.linalg.norm(fx) < tol:
break
J = Jacobian(f, x, h=h)
Delta_x = np.linalg.solve(J, -fx)
x += Delta_x
else:
print("Newton-Raphson: no root found after {0} iterations (eps={1}); "
"best guess is {2} with error {3}".format(Nmax, tol, x, fx))
return x
###Output
_____no_output_____
###Markdown
Solve the 2 masses/3 strings problem Solution
###Code
x = newton_raphson(fLW, x0)
print(x0)
print(x)
###Output
[1.5 0.5 0.5 0.5 0.5 0.5 1. 1. 1. ]
[ 0.76100269 0.26495381 0.83570583 0.64874872 0.9642611 0.54917735
17.16020978 11.54527968 20.27152804]
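As an aside (not part of the original notebook), the same root can be cross-checked with SciPy's general-purpose solver, assuming SciPy is installed:

```python
from scipy.optimize import root

sol = root(fLW, x0)        # solves fLW(x) = 0 starting from x0
print(sol.success, sol.x)  # should reproduce the Newton-Raphson result
```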
###Markdown
Plot the starting configuration and the solution:
###Code
ax = plot_2masses(x0, L, W)
ax = plot_2masses(x, L, W, ax=ax)
###Output
_____no_output_____
###Markdown
Pretty-print the solution (angles in degrees):
###Code
def pretty_print(x):
theta = np.rad2deg(np.arcsin(x[0:3]))
tensions = x[6:]
print("theta1 = {0[0]:.1f} \t theta2 = {0[1]:.1f} \t theta3 = {0[2]:.1f}".format(theta))
print("T1 = {0[0]:.1f} \t T2 = {0[1]:.1f} \t T3 = {0[2]:.1f}".format(tensions))
print("Starting values")
pretty_print(x0)
print()
print("Solution")
pretty_print(x)
###Output
Starting values
theta1 = nan theta2 = 30.0 theta3 = 30.0
T1 = 1.0 T2 = 1.0 T3 = 1.0
Solution
theta1 = 49.6 theta2 = 15.4 theta3 = 56.7
T1 = 17.2 T2 = 11.5 T3 = 20.3
###Markdown
Show intermediate stepsCreate a new function `newton_raphson_intermediates()` based on `newton_raphson()` that returns *all* trial `x` values including the last one.
###Code
def newton_raphson_intermediates(f, x, Nmax=100, tol=1e-8, h=1e-5):
"""n-D Newton-Raphson: solves f(x) = 0.
Iterate until |f(x)| < tol or nmax steps.
Returns all intermediates.
"""
intermediates = []
x = x.copy()
for istep in range(Nmax):
fx = f(x)
if np.linalg.norm(fx) < tol:
break
J = Jacobian(f, x, h=h)
Delta_x = np.linalg.solve(J, -fx)
intermediates.append(x.copy())
x += Delta_x
else:
print("Newton-Raphson: no root found after {0} iterations (eps={1}); "
"best guess is {2} with error {3}".format(Nmax, tol, x, fx))
return np.array(intermediates)
###Output
_____no_output_____
###Markdown
Visualize the intermediate configurations:
###Code
x_series = newton_raphson_intermediates(fLW, x0)
ax = plt.subplot(111)
ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))])
for iteration, x in enumerate(x_series):
plot_2masses(x, L, W, label=str(iteration), ax=ax)
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
It's convenient to turn the above plotting code into a function that we can reuse:
###Code
def plot_series(x_series, L, W):
"""Plot all N masses/strings solution vectors in x_series (N, 9) array"""
ax = plt.subplot(111)
ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))])
for iteration, x in enumerate(x_series):
plot_2masses(x, L, W, label=str(iteration), ax=ax)
ax.legend(loc="best")
return ax
###Output
_____no_output_____
###Markdown
Additional work Try different masses, e.g. M1 = M2 = 10, or M1 = 0, M2 = 10. Use nicer starting parameters that already fulfill the angle equations (7) - (9) (but it works with pretty much any guess):
###Code
# initial parameters
theta0 = np.deg2rad([45, 45, 90])
T0 = np.array([1, 1, 2])
x0 = np.concatenate([np.sin(theta0), np.cos(theta0), T0])
###Output
_____no_output_____
###Markdown
M1 = M2 = 10
###Code
W_2 = np.array([10, 10])
def fLW_2(x, L=L, W=W_2):
return f_2masses(x, L, W)
x_series_2 = newton_raphson_intermediates(fLW_2, x0)
pretty_print(x_series_2[-1])
plot_series(x_series_2, L, W_2)
###Output
_____no_output_____
###Markdown
M1 = 0, M2 = 10
###Code
W_3 = np.array([0, 10])
def fLW_3(x):
return f_2masses(x, L=L, W=W_3)
x_series_3 = newton_raphson_intermediates(fLW_3, x0)
pretty_print(x_series_3[-1])
plot_series(x_series_3, L, W_3)
###Output
_____no_output_____
###Markdown
14 Linear Algebra: String Problem Motivating problem: Two masses on three stringsTwo masses $M_1$ and $M_2$ are hung from a horizontal rod with length $L$ in such a way that a rope of length $L_1$ connects the left end of the rod to $M_1$, a rope of length $L_2$ connects $M_1$ and $M_2$, and a rope of length $L_3$ connects $M_2$ to the right end of the rod. The system is at rest (in equilibrium under gravity).![Schematic of the 1 rod/2 masses/3 strings problem.](1rod2masses3strings.svg)Find the angles that the ropes make with the rod and the tension forces in the ropes. Theoretical backgroundTreat $\sin\theta_i$ and $\cos\theta_i$ together with $T_i$, $1\leq i \leq 3$, as unknowns that have to simultaneously fulfill the nine equations\begin{align}-T_1 \cos\theta_1 + T_2\cos\theta_2 &= 0\\ T_1 \sin\theta_1 - T_2\sin\theta_2 - W_1 &= 0\\ -T_2\cos\theta_2 + T_3\cos\theta_3 &= 0\\ T_2\sin\theta_2 + T_3\sin\theta_3 - W_2 &= 0\\ L_1\cos\theta_1 + L_2\cos\theta_2 + L_3\cos\theta_3 - L &= 0\\-L_1\sin\theta_1 -L_2\sin\theta_2 + L_3\sin\theta_3 &= 0\\\sin^2\theta_1 + \cos^2\theta_1 - 1 &= 0\\\sin^2\theta_2 + \cos^2\theta_2 - 1 &= 0\\\sin^2\theta_3 + \cos^2\theta_3 - 1 &= 0\end{align}Consider the nine equations a vector function $\mathbf{f}$ that takes a 9-vector $\mathbf{x}$ of the unknowns as argument:\begin{align}\mathbf{f}(\mathbf{x}) &= 0\\\mathbf{x} &= \left(\begin{array}{c}x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8\end{array}\right) =\left(\begin{array}{c}\sin\theta_1 \\ \sin\theta_2 \\ \sin\theta_3 \\\cos\theta_1 \\ \cos\theta_2 \\ \cos\theta_3 \\T_1 \\ T_2 \\ T_3\end{array}\right) \\\mathbf{L} &= \left(\begin{array}{c}L \\ L_1 \\ L_2 \\ L_3\end{array}\right), \quad\mathbf{W} = \left(\begin{array}{c}W_1 \\ W_2\end{array}\right)\end{align} Solve with generalized Newton-Raphson:$$\mathsf{J}(\mathbf{x}) \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x})$$and $$\mathbf{x} \leftarrow \mathbf{x} + \Delta\mathbf{x}.$$ Problem setupSet the problem parameters and the objective function $\mathbf{f}(\mathbf{x})$
###Code
import numpy as np
# problem parameters
W = np.array([10, 20])
L = np.array([8, 3, 4, 4])
def f_2masses(x, L, W):
return np.array([
-x[6]*x[3] + x[7]*x[4],
x[6]*x[0] - x[7]*x[1] - W[0],
-x[7]*x[4] + x[8]*x[5],
x[7]*x[1] + x[8]*x[2] - W[1],
L[1]*x[3] + L[2]*x[4] + L[3]*x[5] - L[0],
-L[1]*x[0] - L[2]*x[1] + L[3]*x[2],
x[0]**2 + x[3]**2 - 1,
x[1]**2 + x[4]**2 - 1,
x[2]**2 + x[5]**2 - 1,
])
def fLW(x):
return f_2masses(x, L, W)
###Output
_____no_output_____
###Markdown
Initial values Guess some initial values (they don't have to fulfill the equations!):
###Code
# initial parameters
theta0 = np.deg2rad([45, 45, 90])
T0 = np.array([1, 1, 2])
x0 = np.concatenate([np.sin(theta0), np.cos(theta0), T0])
x0
f_2masses(x0, L, W)
###Output
_____no_output_____
###Markdown
VisualizationPlot the positions of the 2 masses and the 3 strings for any solution vector $\mathbf{x}$:
###Code
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
def plot_2masses(x, L, W, **kwargs):
"""Plot 2 mass/3 string problem for parameter vector x and parameters L and W"""
kwargs.setdefault('linestyle', '-')
kwargs.setdefault('marker', 'o')
kwargs.setdefault('linewidth', 1)
r0 = np.array([0, 0])
r1 = r0 + np.array([L[0], 0])
rod = np.transpose([r0, r1])
L1 = r0 + np.array([L[1]*x[3], -L[1]*x[0]])
L2 = L1 + np.array([L[2]*x[4], -L[2]*x[1]])
L3 = L2 + np.array([L[3]*x[5], L[3]*x[2]])
strings = np.transpose([r0, L1, L2, L3])
ax = plt.subplot(111)
ax.plot(rod[0], rod[1], color="black", marker="d", linewidth=4)
ax.plot(strings[0], strings[1], **kwargs)
ax.set_aspect(1)
return ax
###Output
_____no_output_____
###Markdown
What does the initial guess look like?
###Code
plot_2masses(x0, L, W)
###Output
_____no_output_____
###Markdown
Jacobian Write a function `Jacobian(f, x, h=1e-5)` that computes the Jacobian matrix numerically (use the central difference algorithm).
###Code
def Jacobian(f, x, h=1e-5):
"""df_i/dx_j with central difference (f(x+h/2)-f(x-h/2))/h"""
J = np.zeros((len(f(x)), len(x)), dtype=np.float64)
hvec = np.zeros_like(x, dtype=np.float64)
for j in range(len(x)):
hvec *= 0
hvec[j] = 0.5*h
J[:, j] = (f(x + hvec) - f(x - hvec))/h
return J
###Output
_____no_output_____
###Markdown
Test Jacobian on $$\mathbf{f}(\mathbf{x}) = \left( \begin{array}{c} x_0^2 - x_1 \\ x_0 \end{array}\right)$$with analytical result$$\mathsf{J} = \frac{\partial f_i}{\partial x_j} =\left( \begin{array}{cc} 2 x_0 & -1\\ 1 & 0\end{array}\right)$$
###Code
def ftest(x):
return np.array([
x[0]**2 - x[1],
x[0]
])
x0test = np.array([1, 0])
J = Jacobian(ftest, x0test)
print(J)
###Output
[[ 2. -1.]
[ 1. 0.]]
###Markdown
Test that it also works for our starting vector:
###Code
Jacobian(fLW, x0)
###Output
_____no_output_____
###Markdown
n-D Newton-Raphson Root Finding Write a function `newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5)` to find a root for a vector function `f(x)=0`. (See also [13 Root-finding by trial-and-error](http://asu-compmethodsphysics-phy494.github.io/ASU-PHY494//2017/03/16/13_Root_finding/) and the _1D Newton-Raphson algorithm_ in [13-Root-finding.ipynb](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/13_root_finding/13-Root-finding.ipynb).) As a convergence criterion we demand that the length of the vector `f(x)` (the norm --- see `np.linalg.norm`) be less than the tolerance.
###Code
def newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5):
"""n-D Newton-Raphson: solves f(x) = 0.
Iterate until |f(x)| < tol or nmax steps.
"""
x = x.copy()
for istep in range(Nmax):
fx = f(x)
if np.linalg.norm(fx) < tol:
break
J = Jacobian(f, x, h=h)
Delta_x = np.linalg.solve(J, -fx)
x += Delta_x
else:
print("Newton-Raphson: no root found after {0} iterations (eps={1}); "
"best guess is {2} with error {3}".format(Nmax, tol, x, fx))
return x
###Output
_____no_output_____
###Markdown
Solve the 2 masses/3 strings problem Solution
###Code
x = newton_raphson(fLW, x0)
print(x0)
print(x)
###Output
[ 7.07106781e-01 7.07106781e-01 1.00000000e+00 7.07106781e-01
7.07106781e-01 6.12323400e-17 1.00000000e+00 1.00000000e+00
2.00000000e+00]
[ 0.76100269 0.26495381 0.83570583 0.64874872 0.9642611
0.54917735 17.16020978 11.54527968 20.27152804]
###Markdown
Plot the starting configuration and the solution:
###Code
plot_2masses(x0, L, W)
plot_2masses(x, L, W)
###Output
_____no_output_____
###Markdown
Pretty-print the solution (angles in degrees):
###Code
def pretty_print(x):
theta = np.rad2deg(np.arcsin(x[0:3]))
tensions = x[6:]
print("theta1 = {0[0]:.1f} \t theta2 = {0[1]:.1f} \t theta3 = {0[2]:.1f}".format(theta))
print("T1 = {0[0]:.1f} \t T2 = {0[1]:.1f} \t T3 = {0[2]:.1f}".format(tensions))
print("Starting values")
pretty_print(x0)
print()
print("Solution")
pretty_print(x)
###Output
Starting values
theta1 = 45.0 theta2 = 45.0 theta3 = 90.0
T1 = 1.0 T2 = 1.0 T3 = 2.0
Solution
theta1 = 49.6 theta2 = 15.4 theta3 = 56.7
T1 = 17.2 T2 = 11.5 T3 = 20.3
###Markdown
Show intermediate stepsCreate a new function `newton_raphson_intermediates()` based on `newton_raphson()` that returns *all* trial `x` values including the last one.
###Code
def newton_raphson_intermediates(f, x, Nmax=100, tol=1e-8, h=1e-5):
"""n-D Newton-Raphson: solves f(x) = 0.
Iterate until |f(x)| < tol or nmax steps.
Returns all intermediates.
"""
intermediates = []
x = x.copy()
for istep in range(Nmax):
fx = f(x)
if np.linalg.norm(fx) < tol:
break
J = Jacobian(f, x, h=h)
Delta_x = np.linalg.solve(J, -fx)
intermediates.append(x.copy())
x += Delta_x
else:
print("Newton-Raphson: no root found after {0} iterations (eps={1}); "
"best guess is {2} with error {3}".format(Nmax, tol, x, fx))
return np.array(intermediates)
###Output
_____no_output_____
###Markdown
Visualize the intermediate configurations:
###Code
x_series = newton_raphson_intermediates(fLW, x0)
ax = plt.subplot(111)
ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))])
for x in x_series:
plot_2masses(x, L, W)
###Output
_____no_output_____
###Markdown
It's convenient to turn the above plotting code into a function that we can reuse:
###Code
def plot_series(x_series, L, W):
"""Plot all N masses/strings solution vectors in x_series (N, 9) array"""
ax = plt.subplot(111)
ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))])
for x in x_series:
plot_2masses(x, L, W)
return ax
###Output
_____no_output_____
###Markdown
Additional work Try different masses, e.g. M1 = M2 = 10, or M1 = 0, M2 = 10. M1 = M2 = 10
###Code
W_2 = np.array([10, 10])
def fLW_2(x):
return f_2masses(x, L, W_2)
x_series_2 = newton_raphson_intermediates(fLW_2, x0)
pretty_print(x_series_2[-1])
plot_series(x_series_2, L, W_2)
###Output
_____no_output_____
###Markdown
M1 = 0, M2 = 10
###Code
W_3 = np.array([0, 10])
def fLW_3(x):
return f_2masses(x, L, W_3)
x_series_3 = newton_raphson_intermediates(fLW_3, x0)
pretty_print(x_series_3[-1])
plot_series(x_series_3, L, W_3)
###Output
_____no_output_____
###Markdown
14 Linear Algebra: String Problem Motivating problem: Two masses on three stringsTwo masses $M_1$ and $M_2$ are hung from a horizontal rod with length $L$ in such a way that a rope of length $L_1$ connects the left end of the rod to $M_1$, a rope of length $L_2$ connects $M_1$ and $M_2$, and a rope of length $L_3$ connects $M_2$ to the right end of the rod. The system is at rest (in equilibrium under gravity).![Schematic of the 1 rod/2 masses/3 strings problem.](1rod2masses3strings.svg)Find the angles that the ropes make with the rod and the tension forces in the ropes. Theoretical backgroundTreat $\sin\theta_i$ and $\cos\theta_i$ together with $T_i$, $1\leq i \leq 3$, as unknowns that have to simultaneously fulfill the nine equations\begin{align}-T_1 \cos\theta_1 + T_2\cos\theta_2 &= 0\\ T_1 \sin\theta_1 - T_2\sin\theta_2 - W_1 &= 0\\ -T_2\cos\theta_2 + T_3\cos\theta_3 &= 0\\ T_2\sin\theta_2 + T_3\sin\theta_3 - W_2 &= 0\\ L_1\cos\theta_1 + L_2\cos\theta_2 + L_3\cos\theta_3 - L &= 0\\-L_1\sin\theta_1 -L_2\sin\theta_2 + L_3\sin\theta_3 &= 0\\\sin^2\theta_1 + \cos^2\theta_1 - 1 &= 0\\\sin^2\theta_2 + \cos^2\theta_2 - 1 &= 0\\\sin^2\theta_3 + \cos^2\theta_3 - 1 &= 0\end{align}Consider the nine equations a vector function $\mathbf{f}$ that takes a 9-vector $\mathbf{x}$ of the unknowns as argument:\begin{align}\mathbf{f}(\mathbf{x}) &= 0\\\mathbf{x} &= \left(\begin{array}{c}x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8\end{array}\right) =\left(\begin{array}{c}\sin\theta_1 \\ \sin\theta_2 \\ \sin\theta_3 \\\cos\theta_1 \\ \cos\theta_2 \\ \cos\theta_3 \\T_1 \\ T_2 \\ T_3\end{array}\right) \\\mathbf{L} &= \left(\begin{array}{c}L \\ L_1 \\ L_2 \\ L_3\end{array}\right), \quad\mathbf{W} = \left(\begin{array}{c}W_1 \\ W_2\end{array}\right)\end{align} Solve with generalized Newton-Raphson:$$\mathsf{J}(\mathbf{x}) \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x})$$and $$\mathbf{x} \leftarrow \mathbf{x} + \Delta\mathbf{x}.$$ Problem setupSet the problem parameters and the objective function $\mathbf{f}(\mathbf{x})$
###Code
import numpy as np
# problem parameters
W = np.array([10, 20])
L = np.array([8, 3, 4, 4])
def f_2masses(x, L, W):
return np.array([
-x[6]*x[3] + x[7]*x[4],
x[6]*x[0] - x[7]*x[1] - W[0],
-x[7]*x[4] + x[8]*x[5],
x[7]*x[1] + x[8]*x[2] - W[1],
L[1]*x[3] + L[2]*x[4] + L[3]*x[5] - L[0],
-L[1]*x[0] - L[2]*x[1] + L[3]*x[2],
x[0]**2 + x[3]**2 - 1,
x[1]**2 + x[4]**2 - 1,
x[2]**2 + x[5]**2 - 1,
])
def fLW(x):
return f_2masses(x, L, W)
###Output
_____no_output_____
###Markdown
Initial values Guess some initial values (they don't have to fulfill the equations!):
###Code
# initial parameters
#theta0 = np.deg2rad([45, 45, 90])
#T0 = np.array([1, 1, 2])
#x0 = np.concatenate([np.sin(theta0), np.cos(theta0), T0])
x0 = np.array([1.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1])
x0
f_2masses(x0, L, W)
###Output
_____no_output_____
###Markdown
VisualizationPlot the positions of the 2 masses and the 3 strings for any solution vector $\mathbf{x}$:
###Code
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
def plot_2masses(x, L, W, **kwargs):
"""Plot 2 mass/3 string problem for parameter vector x and parameters L and W"""
kwargs.setdefault('linestyle', '-')
kwargs.setdefault('marker', 'o')
kwargs.setdefault('linewidth', 1)
r0 = np.array([0, 0])
r1 = r0 + np.array([L[0], 0])
rod = np.transpose([r0, r1])
L1 = r0 + np.array([L[1]*x[3], -L[1]*x[0]])
L2 = L1 + np.array([L[2]*x[4], -L[2]*x[1]])
L3 = L2 + np.array([L[3]*x[5], L[3]*x[2]])
strings = np.transpose([r0, L1, L2, L3])
ax = plt.subplot(111)
ax.plot(rod[0], rod[1], color="black", marker="d", linewidth=4)
ax.plot(strings[0], strings[1], **kwargs)
ax.set_aspect(1)
return ax
###Output
_____no_output_____
###Markdown
What does the initial guess look like?
###Code
plot_2masses(x0, L, W)
###Output
_____no_output_____
###Markdown
Jacobian Write a function `Jacobian(f, x, h=1e-5)` that computes the Jacobian matrix numerically (use the central difference algorithm).\begin{align}\mathbf{J} &= \frac{\partial \mathbf{f}}{\partial\mathbf{x}} \\J_{ij} &= \frac{\partial f_i(x_1, \dots, x_j, \dots)}{\partial x_j} \\ &\approx \frac{f_i(x_1, \dots, x_j + \frac{h}{2}, \dots) - f_i(x_1, \dots, x_j - \frac{h}{2}, \dots)}{h}\end{align}
###Code
def Jacobian(f, x, h=1e-5):
    """df_i/dx_j with central difference: (f_i(x_j + h/2) - f_i(x_j - h/2))/h"""
    J = np.zeros((len(f(x)), len(x)), dtype=np.float64)
    hvec = np.zeros_like(x, dtype=np.float64)
    for j in range(len(x)):
        hvec[:] = 0        # reset the perturbation vector
        hvec[j] = 0.5*h    # perturb only component j
        J[:, j] = (f(x + hvec) - f(x - hvec))/h
    return J
###Output
_____no_output_____
###Markdown
Test Jacobian on $$\mathbf{f}(\mathbf{x}) = \left( \begin{array}{c} x_0^2 - x_1 \\ x_0 \end{array}\right)$$with analytical result$$\mathsf{J} = \frac{\partial f_i}{\partial x_j} =\left( \begin{array}{cc} 2 x_0 & -1\\ 1 & 0\end{array}\right)$$
###Code
def ftest(x):
return np.array([
x[0]**2 - x[1],
x[0]
])
x0test = np.array([1, 0])
J = Jacobian(ftest, x0test)
print(J)
###Output
[[ 2. -1.]
[ 1. 0.]]
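###Markdown
We can also compare the numerical result against the analytical Jacobian programmatically (both `J` and `x0test` are defined above):
###Code
# analytical Jacobian at x0test = [1, 0], from the formula above
J_exact = np.array([[2*x0test[0], -1.],
                    [1., 0.]])
print(np.allclose(J, J_exact))
###Output
_____no_output_____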
###Markdown
Test that it also works for our starting vector:
###Code
Jacobian(fLW, x0)
###Output
_____no_output_____
###Markdown
n-D Newton-Raphson Root Finding Write a function `newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5)` to find a root for a vector function `f(x)=0`. (See also [13 Root-finding by trial-and-error](https://asu-compmethodsphysics-phy494.github.io/ASU-PHY494/2019/03/19/13_Root_finding/) and the _1D Newton-Raphson algorithm_ in [13-Root-finding.ipynb](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/13_root_finding/13-Root-finding.ipynb).) As a convergence criterion we demand that the length of the vector `f(x)` (the norm --- see `np.linalg.norm`) be less than the tolerance.
###Code
def newton_raphson(f, x, Nmax=100, tol=1e-8, h=1e-5):
"""n-D Newton-Raphson: solves f(x) = 0.
Iterate until |f(x)| < tol or Nmax steps.
"""
x = x.copy()
for istep in range(Nmax):
fx = f(x)
if np.linalg.norm(fx) < tol:
break
J = Jacobian(f, x, h=h)
Delta_x = np.linalg.solve(J, -fx)
x += Delta_x
else:
print("Newton-Raphson: no root found after {0} iterations (eps={1}); "
"best guess is {2} with error {3}".format(Nmax, tol, x, fx))
return x
###Output
_____no_output_____
###Markdown
Solve the 2 masses/3 strings problem Solution
###Code
x = newton_raphson(fLW, x0)
print(x0)
print(x)
###Output
[ 1.5 0.5 0.5 0.5 0.5 0.5 1. 1. 1. ]
[ 0.76100269 0.26495381 0.83570583 0.64874872 0.9642611
0.54917735 17.16020978 11.54527968 20.27152804]
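###Markdown
As a sanity check, the residual norm $|\mathbf{f}(\mathbf{x})|$ at the solution should be below the tolerance, and the angle identities $\sin^2\theta_i + \cos^2\theta_i = 1$ should hold:
###Code
print(np.linalg.norm(fLW(x)))    # should be below tol = 1e-8
print(x[0:3]**2 + x[3:6]**2)     # x[0:3] are the sines, x[3:6] the cosines
###Output
_____no_output_____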
###Markdown
Plot the starting configuration and the solution:
###Code
plot_2masses(x0, L, W)
plot_2masses(x, L, W)
###Output
_____no_output_____
###Markdown
Pretty-print the solution (angles in degrees):
###Code
def pretty_print(x):
theta = np.rad2deg(np.arcsin(x[0:3]))
tensions = x[6:]
print("theta1 = {0[0]:.1f} \t theta2 = {0[1]:.1f} \t theta3 = {0[2]:.1f}".format(theta))
print("T1 = {0[0]:.1f} \t T2 = {0[1]:.1f} \t T3 = {0[2]:.1f}".format(tensions))
print("Starting values")
pretty_print(x0)
print()
print("Solution")
pretty_print(x)
###Output
Starting values
theta1 = 45.0 theta2 = 45.0 theta3 = 90.0
T1 = 1.0 T2 = 1.0 T3 = 2.0
Solution
theta1 = 49.6 theta2 = 15.4 theta3 = 56.7
T1 = 17.2 T2 = 11.5 T3 = 20.3
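###Markdown
One more consistency check: adding the two vertical force-balance equations shows that the outer ropes together must support the total weight, $T_1\sin\theta_1 + T_3\sin\theta_3 = W_1 + W_2$:
###Code
# T1*sin(theta1) + T3*sin(theta3) should equal W1 + W2
print(x[6]*x[0] + x[8]*x[2], W.sum())
###Output
_____no_output_____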
###Markdown
Show intermediate stepsCreate a new function `newton_raphson_intermediates()` based on `newton_raphson()` that returns *all* trial `x` values including the last one.
###Code
def newton_raphson_intermediates(f, x, Nmax=100, tol=1e-8, h=1e-5):
"""n-D Newton-Raphson: solves f(x) = 0.
Iterate until |f(x)| < tol or Nmax steps.
Returns all intermediates.
"""
intermediates = []
x = x.copy()
for istep in range(Nmax):
fx = f(x)
if np.linalg.norm(fx) < tol:
break
J = Jacobian(f, x, h=h)
Delta_x = np.linalg.solve(J, -fx)
intermediates.append(x.copy())
x += Delta_x
else:
print("Newton-Raphson: no root found after {0} iterations (eps={1}); "
"best guess is {2} with error {3}".format(Nmax, tol, x, fx))
return np.array(intermediates)
###Output
_____no_output_____
###Markdown
Visualize the intermediate configurations:
###Code
x_series = newton_raphson_intermediates(fLW, x0)
ax = plt.subplot(111)
ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))])
for iteration, x in enumerate(x_series):
plot_2masses(x, L, W, label=str(iteration))
plt.legend(loc="best");
###Output
_____no_output_____
###Markdown
It's convenient to turn the above plotting code into a function that we can reuse:
###Code
def plot_series(x_series, L, W):
"""Plot all N masses/strings solution vectors in x_series (N, 9) array"""
ax = plt.subplot(111)
ax.set_prop_cycle("color", [plt.cm.viridis_r(i) for i in np.linspace(0, 1, len(x_series))])
for iteration, x in enumerate(x_series):
plot_2masses(x, L, W, label=str(iteration))
plt.legend(loc="best")
return ax
###Output
_____no_output_____
###Markdown
Additional workTry different masses, e.g. M1 = M2 = 10, or M1 = 0, M2 = 10. Use nicer starting parameters that already fulfill the angle equations (7) - (9) (but it works with pretty much any guess):
###Code
# initial parameters
theta0 = np.deg2rad([45, 45, 90])
T0 = np.array([1, 1, 2])
x0 = np.concatenate([np.sin(theta0), np.cos(theta0), T0])
###Output
_____no_output_____
###Markdown
M1 = M2 = 10
###Code
W_2 = np.array([10, 10])
def fLW_2(x):
return f_2masses(x, L, W_2)
x_series_2 = newton_raphson_intermediates(fLW_2, x0)
pretty_print(x_series_2[-1])
plot_series(x_series_2, L, W_2)
###Output
_____no_output_____
###Markdown
M1 = 0, M2 = 10
###Code
W_3 = np.array([0, 10])
def fLW_3(x):
return f_2masses(x, L, W_3)
x_series_3 = newton_raphson_intermediates(fLW_3, x0)
pretty_print(x_series_3[-1])
plot_series(x_series_3, L, W_3)
###Output
_____no_output_____ |
notebooks/subside_example.ipynb | ###Markdown
Using a BMI: Flexural SubsidenceThis example explores how to use a BMI implementation using sedflux's subsidence model as an example. Links* [sedflux source code](https://github.com/mcflugen/sedflux): Look at the files that have *subside* in their name.* [sedflux description on CSDMS](https://csdms.colorado.edu/wiki/Model_help:Sedflux): Detailed information on the sedflux model. Interacting with the Subside BMI using Python Some magic that allows us to view images within the notebook.
###Code
from __future__ import print_function
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Import the `Subside` class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
###Code
from pymt import plugins
subside = plugins.Subside()
###Output
✓ Sedflux3D
✓ Child
✓ Hydrotrend
✓ OverlandFlow
✓ BmiFrostNumberMethod
✓ BmiKuMethod
✓ Waves
✓ BmiFrostNumberMethod
✓ BmiKuMethod
✓ Cem
✓ Waves
✓ Avulsion
✓ Plume
✓ Sedflux3D
✓ Subside
###Markdown
Even though we can't run our subsidence model yet, we can still get some information about it. *Just don't try to run it.* One thing we can do is get the names of its output and input variables.
###Code
subside.output_var_names
subside.input_var_names
###Output
_____no_output_____
###Markdown
We can also get information about specific variables. Here we'll look at some info about the overlying load. This is the main input of the Subside model. Notice that BMI components always use [CSDMS standard names](http://csdms.colorado.edu/wiki/CSDMS_Standard_Names). With that name we can get information about that variable and the grid that it is on. OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI **initialize** method, which takes the name of an input file. For this example we'll use the component's `setup` method to generate a default configuration file and folder, and pass those to **initialize**.
###Code
config_file, config_folder = subside.setup()
subside.initialize(config_file, dir=config_folder)
subside.var["earth_material_load__pressure"]
subside.grid[0].node_shape
###Output
_____no_output_____
###Markdown
Before running the model, let's set an input parameter - the overlying load.
###Code
import numpy as np
# a point load at the center of the 500 x 500 grid
load = np.zeros((500, 500))
load[250, 250] = 1e3
###Output
_____no_output_____
###Markdown
The main output variable for this model is *deflection*. In this case, the CSDMS Standard Name is: "lithosphere__increment_of_elevation". First we find out which of Subside's grids contains deflection.
###Code
subside.var['lithosphere__increment_of_elevation']
###Output
_____no_output_____
###Markdown
With the *grid_id*, we can now get information about the grid. For instance, the number of dimensions and the type of grid (structured, unstructured, etc.). This grid happens to be *uniform rectilinear*. Scalar variables, in contrast, aren't on grids at all.
###Code
subside.grid[0]
###Output
_____no_output_____
###Markdown
Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include:* get_grid_shape* get_grid_spacing* get_grid_origin A sketch of these queries is shown below (note the assumption flagged in the comments).
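###Code
# A sketch of the grid queries listed above. The get_grid_* names come from
# the BMI spec itself; reaching the wrapped BMI instance as `subside.bmi` is
# an assumption of this sketch and may differ between pymt versions.
bmi = subside.bmi
print(bmi.get_grid_shape(0))    # number of nodes along each dimension
print(bmi.get_grid_spacing(0))  # node spacing along each dimension
print(bmi.get_grid_origin(0))   # coordinates of the grid's first node
###Output
_____no_output_____
###Markdown
Now we set the load, advance the model one time step with **update**, and plot the resulting deflection.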
###Code
subside.set_value("earth_material_load__pressure", load)
subside.update()
dz = subside.get_value('lithosphere__increment_of_elevation')
plt.imshow(dz.reshape((500, 500)))
load[125, 125] = 2e3
subside.set_value("earth_material_load__pressure", load)
subside.update()
dz = subside.get_value('lithosphere__increment_of_elevation')
plt.imshow(dz.reshape((500, 500)))
###Output
_____no_output_____ |
content/04_Walk_Forward_Modeling.ipynb | ###Markdown
Key Takeaways:* Traditional methods of validation and cross-validation are problematic for time series prediction problems* The solution is to use a "walk-forward" approach which incorporates new information as it becomes available.* This approach gives us a more realistic view of how effective our model would truly have been in the past, and helps to avoid the overfitting trap.* It's often important to exclude data which is too far in the past using a rolling window. IntroductionThis is the fourth post in my series on transforming data into alpha. If you haven't yet seen the [framework overview](data_management.html), [feature engineering guide](feature_engineering.html), or [feature_selection_guide](feature_selection.html), please take a minute to read those first. This post will build on the framework and philosophy presented there. This post is going to delve into the mechanics of _walk-forward modeling_ which is, in my view, the most robust way to train and apply machine learning models in inherently sequential domains like finance. If you'd like to replicate and experiment with the below code, _you can download the source notebook for this post by right-clicking on the below button and choosing "save link as"_ The domain of market prediction presents several unique challenges for machine learning practitioners which do not exist in spam detection, natural language processing, image recognition, or other common areas of machine learning success, including:* Low signal-to-noise ratio* Non-stationarity (aka regime switching)* Market adaptation (aka reflexivity)The combination of these challenges makes it difficult to train and tune models which _generalize_ to the future, where it really matters. This, in my view, has led many to conclude that markets cannot be predicted with machine learning techniques.This is a flawed conclusion. Instead, we should conclude that it's imperative to follow a modeling process which (1) avoids the trap of overfitting models to noise, (2) takes into account the sequential dimension of time series data, and (3) allows us to get an accurate view about how various models would have performed in the past _given the information we would have had at that time_."But wait!" I hear you saying, "Overfitting is not unique to market prediction!". True, but the commonly used techniques of _train-test split_ and _cross-validation_ each have major flaws when applied to an inherently sequential set of financial time series data. The overriding objective of the methods described here is to __overcome the issues inherent in traditional cross validation approaches__. Issues with typical approaches__Train-Test Split:__ One frequently recommended approach in finance ML tutorials is to split the data (commonly, by time) and to train on the first chunk of data and test out-of-sample on the second. This is not wrong per se, but creates two problems:1. Only a fraction of data can be used to generate out-of-sample performance of the model. If you split 80/20, then you only get results for 2 of every 10 years of data. Yuck.2. You're implicitly biased towards the most recent period. Since we're likely going to use the older data to train and newer data to test (for good reason...) you are implicitly searching for a model which was in favor during the most recent couple of years. Maybe last year is a good approximation for next year, but maybe it's not. Maybe 2008 would hold some important lessons for us to evaluate, but these would be lost to us. 
You might consider train/test split with random (or non-random) sampling but this creates other fairly obvious issues with peeking into the future. __Cross Validation:__ In normal ML usage, cross-validation methods, such as the [K-fold cross validation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html), provide an elegant way to have our digital cake and eat it too. However, with inherently sequential datasets, it's really not kosher to train a model on 2016 and 2018 to predict 2017. Another common method, [leave one out cross-validation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeaveOneOut.html#sklearn.model_selection.LeaveOneOut), creates even more extreme peeking into the future. _These issues are especially intolerable if your model predicts values multiple bars in the future_, which results in your models being trained on data which significantly overlaps with your test data - a subtle but deadly form of data spillover. The Solution: Walk-forward Train/TestBut we are not stuck with either of these problematic approaches. The better - and I think much more intuitive - approach is to simulate models in a "walk-forward" sequence, periodically re-training the model to incorporate all data available at that point in time. The below picture is worth at least 1000 words. Say we have six years of data to train + test. At the end of the first year (T=1) we could have built model M1, which was trained on the first year's data (the grey bar). After the 5th year, we could use up to five years of data to train model M5. Getting StartedAs in prior posts, we'll begin by fetching some real stock price data and then generating from it a synthetic dataset which will make it easier to follow the principles of this tutorial. Please refer to the previous post [Feature Selection](feature_selection.html) for further explanation of the reasoning and mechanics of creating synthetic data.
###Code
from IPython.core.display import HTML,Image
import sys
sys.path.append('/anaconda/')
import config
import numpy as np
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
# Needed for this issue: https://github.com/pydata/pandas-datareader/issues/534
import pandas_datareader.data as web
%matplotlib inline
def get_symbols(symbols,data_source, begin_date=None,end_date=None):
out = pd.DataFrame()
for symbol in symbols:
df = web.DataReader(symbol, data_source,begin_date, end_date)[['AdjOpen','AdjHigh','AdjLow','AdjClose','AdjVolume']].reset_index()
df.columns = ['date','open','high','low','close','volume'] #my convention: always lowercase
df['symbol'] = symbol # add a new column which contains the symbol so we can keep multiple symbols in the same dataframe
df = df.set_index(['date','symbol'])
out = pd.concat([out,df],axis=0) #stacks on top of previously collected data
return out.sort_index()
prices = get_symbols(['AAPL','CSCO','AMZN','YHOO','MSFT'],data_source='quandl',begin_date='2012-01-01',end_date=None)
# note, we're only using real price data to get an accurate date/symbol index set.
###Output
_____no_output_____
###Markdown
Similar to the [previous post](feature_selection.html), we will generate some random data to serve as features _and then derive_ an outcome (response) variable from the features (to ensure that our made-up features will have the ability to partially predict the target variable). However, this time around we'll add two non-random patterns to the data. First, we will add some amount of "memory" (autoregression) to the white noise with a short function called `add_memory()`. This will make the data somewhat less random and more similar to actual markets - which I think we can all agree are not a purely Gaussian random walk... Second, we'll add a trend to the importance of each feature to the target variable to simulate the reality that various features - and the market factors they represent - move in and out of favor over time (aka "regime shift"). * Feature `f01` is becoming more important* Feature `f02` is becoming less important* Feature `f03` is oscillating between very important and unimportant (using a sine wave pattern)* Feature `f04` is not changing
###Code
num_obs = prices.close.count()
def add_memory(s,n_days=50,memory_strength=0.1):
''' adds autoregressive behavior to series of data'''
add_ewm = lambda x: (1-memory_strength)*x + memory_strength*x.ewm(n_days).mean()
out = s.groupby(level='symbol').apply(add_ewm)
return out
# generate feature data
f01 = pd.Series(np.random.randn(num_obs),index=prices.index)
f01 = add_memory(f01,10,0.1)
f02 = pd.Series(np.random.randn(num_obs),index=prices.index)
f02 = add_memory(f02,10,0.1)
f03 = pd.Series(np.random.randn(num_obs),index=prices.index)
f03 = add_memory(f03,10,0.1)
f04 = pd.Series(np.random.randn(num_obs),index=prices.index)
f04 = f04 # no memory
## now, create response variable such that it is related to features
# f01 becomes increasingly important, f02 becomes decreasingly important,
# f03 oscillates in importance, f04 is stationary, finally a noise component is added
outcome = f01 * np.linspace(0.5,1.5,num_obs) + \
f02 * np.linspace(1.5,0.5,num_obs) + \
f03 * pd.Series(np.sin(2*np.pi*np.linspace(0,1,num_obs)*2)+1,index=f03.index) + \
f04 + \
np.random.randn(num_obs) * 3
outcome.name = 'outcome'
###Output
_____no_output_____
###Markdown
So, we've now got four features which contain random and non-random elements, and we have an `outcome` target variable which can be partially explained by these features. Let's run a quick linear model to verify that this worked as expected:
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression()
features = pd.concat([f01,f02,f03,f04],axis=1)
features.columns = ['f01','f02','f03','f04']
model.fit(X=features,y=outcome)
print('RSQ: '+str(model.score(X=features,y=outcome)))
print('Regression Coefficients: '+str(model.coef_))
###Output
RSQ: 0.26872811401332664
Regression Coefficients: [0.97587665 1.04585218 1.01461189 0.99374324]
###Markdown
Indeed. A simple, linear model run on the entire dataset shows a remarkable (for this domain) RSQ of >0.25 and approximately equal coefficients for each of the features, suggesting that they're all contributing strongly to the model. However, this overlooks the very important reality that our data is non-stationary and what matters is how the model will _generalize_ to future, unseen data. To illustrate, I'll re-run the same model after splitting the data into train/test sets, with the first 80% used to train and the last 20% used to test.
###Code
split_point = int(0.80*len(outcome))
X_train = features.iloc[:split_point,:]
y_train = outcome.iloc[:split_point]
X_test = features.iloc[split_point:,:]
y_test = outcome.iloc[split_point:]
model = LinearRegression()
model.fit(X=X_train,y=y_train)
print('RSQ in sample: '+str(model.score(X=X_train,y=y_train)))
print('RSQ out of sample: '+str(model.score(X=X_test,y=y_test)))
print('Regression Coefficients: '+str(model.coef_))
###Output
RSQ in sample: 0.29476591841646305
RSQ out of sample: 0.13667733307962338
Regression Coefficients: [0.8361922 1.15876943 1.20372473 1.01599709]
###Markdown
Here, the results tell a different story. We see that the model's RSQ on the unseen test data degraded to only about half of the value on the training data. Even if we were still satisfied with the degraded RSQ value, we would be concerned that the model might degrade even further in the next period. This is where our walk-forward process comes in. A better approach... Here, I'll outline the basic design pattern for a robust approach to walk-forward modeling. This is simplified. In real-world usage, we'd likely add many enhancements but this will illustrate the concept. Training modelsThe first step of the walk-forward approach is to train our model _using only the data available at any given point in time_. Theoretically, we have new data every bar so we could retrain the model daily if we wished. However, for simplicity we'll retrain much less frequently - at the end of each calendar quarter.We'll first create a list of timestamps for all quarter-end dates in our dataset. These will become our days on which to retrain models. Then, on each of the recalculation dates, we'll follow a simple procedure: 1. slice out only the data which has happened up until that point2. train a model with that data3. save a copy of that fitted model object in a pandas Series called `models` which is indexed by the recalculation date. We could just as easily have used a dict or other data structure, but I find that pandas Series works well for this purpose. Pandas is not just for numbers, after all...
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
#print('recalc_dates:')
#print(recalc_dates)
#print()
models = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(None,date),level='date',drop_level=False)
y_train = outcome.xs(slice(None,date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models.loc[date] = model
print("Training on the first {} records, through {}"\
.format(len(y_train),y_train.index.get_level_values('date').max()))
#print("Coefficients: {}".format((model.coef_)))
#print()
###Output
Training on the first 310 records, through 2012-03-30 00:00:00
Training on the first 625 records, through 2012-06-29 00:00:00
Training on the first 940 records, through 2012-09-28 00:00:00
Training on the first 1250 records, through 2012-12-31 00:00:00
Training on the first 1550 records, through 2013-03-28 00:00:00
Training on the first 1870 records, through 2013-06-28 00:00:00
Training on the first 2190 records, through 2013-09-30 00:00:00
Training on the first 2510 records, through 2013-12-31 00:00:00
Training on the first 2815 records, through 2014-03-31 00:00:00
Training on the first 3130 records, through 2014-06-30 00:00:00
Training on the first 3450 records, through 2014-09-30 00:00:00
Training on the first 3770 records, through 2014-12-31 00:00:00
Training on the first 4075 records, through 2015-03-31 00:00:00
Training on the first 4390 records, through 2015-06-30 00:00:00
Training on the first 4710 records, through 2015-09-30 00:00:00
Training on the first 5030 records, through 2015-12-31 00:00:00
Training on the first 5335 records, through 2016-03-31 00:00:00
Training on the first 5655 records, through 2016-06-30 00:00:00
Training on the first 5975 records, through 2016-09-30 00:00:00
Training on the first 6290 records, through 2016-12-30 00:00:00
Training on the first 6600 records, through 2017-03-31 00:00:00
Training on the first 6905 records, through 2017-06-30 00:00:00
Training on the first 7155 records, through 2017-09-29 00:00:00
Training on the first 7403 records, through 2017-12-29 00:00:00
###Markdown
To visualize how the model is changing over time, I'll create a simple function to plot the model coefficients:
###Code
def extract_coefs(models):
coefs = pd.DataFrame()
for i,model in enumerate(models):
model_coefs = pd.Series(model.coef_,index=['f01','f02','f03','f04']) #extract coefficients for model
model_coefs.name = models.index[i] # name it with the recalc date
coefs = pd.concat([coefs,model_coefs],axis=1)
return coefs.T
extract_coefs(models).plot(title='Coefficients for Expanding Window Model')
###Output
_____no_output_____
###Markdown
Notice a few things about how the model's coefficients change over time: * The first feature gets increasing weight as it becomes a more important explanatory factor in the target variable,* the second feature does just the opposite, as it should,* the third feature oscillates before converging on a relatively lower endpoint, and * the fourth feature is mostly unchangedBut, in a world of constantly shifting regimes, we can do better. We know that, by the end of the dataset, `f01` is actually much superior to `f02`, however they end at about equal weights. This is because in our final model-building instance, we are using _all_ of our data to train, and this entire set of data correctly shows that _on average_ `f02` is as useful as `f01`, ignoring that `f02`'s best days are long past.
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
models = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models.loc[date] = model
print("Training on the most recent {} records".format(len(y_train)))
#print("Coefficients: {}".format((model.coef_)))
extract_coefs(models).plot(title='Coefficients for Rolling Window Model')
###Output
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 305 records
Training on the most recent 305 records
Training on the most recent 320 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 305 records
Training on the most recent 320 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 305 records
Training on the most recent 250 records
Training on the most recent 248 records
###Markdown
This approach of only considering the 90 days' data prior to each model re-training has had a big impact. We now see that `f01` has overtaken `f02` in significance by the end of the time period, as we'd hope. The on-again, off-again feature `f03` has risen and fallen in prominence over time, again just as we'd hope. Using modelsThe second stage of the process is _using_ these walk-forward models. The process is similar and equally simple. First, we create two arrays, `begin_dates` and `end_dates`, which contain the dates on which each model is used. For instance, for the first model (i=0), we will apply this model to features beginning on the date we trained the model (`recalc_date`) until the day the next model is trained. The `end_dates` array therefore drops the 0th element and appends to the end a far-off date (in the year 2099). We can equally easily use this pattern to apply the models to true out of sample features. As long as it's not later than the year 2099, we'll simply apply the latest and greatest model we have.
###Code
begin_dates = models.index
end_dates = models.index[1:].append(pd.to_datetime(['2099-12-31']))
predictions = pd.Series(index=features.index)
for i,model in enumerate(models): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions.loc[X.index] = p
predictions.shape
###Output
_____no_output_____
###Markdown
So, this looped through each of the models we had trained at various points in time and, for each, used that model to predict the period of time until the next model became available. It runs! Now we can confirm that the rolling model is, in fact, better than the "expanding" model by making predictions with each and comparing to truth.
###Code
models_expanding_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(None,date),level='date',drop_level=False)
y_train = outcome.xs(slice(None,date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models_expanding_window.loc[date] = model
models_rolling_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models_rolling_window.loc[date] = model
begin_dates = models.index
end_dates = models.index[1:].append(pd.to_datetime(['2099-12-31']))
predictions_expanding_window = pd.Series(index=features.index)
for i,model in enumerate(models_expanding_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_expanding_window.loc[X.index] = p
predictions_rolling_window = pd.Series(index=features.index)
for i,model in enumerate(models_rolling_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_rolling_window.loc[X.index] = p
from sklearn.metrics import r2_score
common_idx = outcome.dropna().index.intersection(predictions_expanding_window.dropna().index)
rsq_expanding = r2_score(y_true = outcome[common_idx],y_pred=predictions_expanding_window[common_idx])
rsq_rolling = r2_score(y_true = outcome[common_idx],y_pred=predictions_rolling_window[common_idx])
print("Expanding Window RSQ: {}".format(round(rsq_expanding,3)))
print("Rolling Window RSQ: {}".format(round(rsq_rolling,3)))
###Output
Expanding Window RSQ: 0.256
Rolling Window RSQ: 0.296
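###Markdown
As an extra check on stability, we can break the rolling-window model's out-of-sample RSQ down by calendar year (everything used here is already defined above):
###Code
# year-by-year RSQ for the rolling-window predictions
df_check = pd.DataFrame({'y': outcome[common_idx],
                         'p': predictions_rolling_window[common_idx]})
years = df_check.index.get_level_values('date').year
for yr, grp in df_check.groupby(years):
    print(yr, round(r2_score(grp['y'], grp['p']), 3))
###Output
_____no_output_____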
###Markdown
Great! All of that work added about 4 points to the RSQ, which is certainly worth the effort. Avoiding complexityThe value of this walk-forward methodology is greatest when it helps you to avoid the scourge of overfitting. The linear regression model used up to this point is relatively resistant to overfitting, since it has few parameters. Let's say that, instead of a simple linear regression, we used the much more overfit-prone DecisionTree, which has a tendency to "memorize the past" rather than to recognize patterns in it.
###Code
from sklearn.tree import DecisionTreeRegressor
split_point = int(0.80*len(outcome))
X_train = features.iloc[:split_point,:]
y_train = outcome.iloc[:split_point]
X_test = features.iloc[split_point:,:]
y_test = outcome.iloc[split_point:]
model = DecisionTreeRegressor(max_depth=3)
model.fit(X=X_train,y=y_train)
print('RSQ in sample: '+str(round(model.score(X=X_train,y=y_train),3)))
print('RSQ out of sample: '+str(round(model.score(X=X_test,y=y_test),3)))
###Output
RSQ in sample: 0.179
RSQ out of sample: 0.008
###Markdown
This leads to an overfitting disaster! Now, we'll do the same with our walk-forward framework:
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
models_rolling_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('365 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('365 days'),date),level='date',drop_level=False)
model = DecisionTreeRegressor(max_depth=3)
model.fit(X_train,y_train)
models_rolling_window.loc[date] = model
predictions_rolling_window = pd.Series(index=features.index)
for i,model in enumerate(models_rolling_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_rolling_window.loc[X.index] = p
common_idx = y_test.dropna().index.intersection(predictions_rolling_window.dropna().index)
rsq_rolling = r2_score(y_true = y_test[common_idx],y_pred=predictions_rolling_window[common_idx])
print("RSQ out of sample (rolling): {}".format(round(rsq_rolling,3)))
###Output
RSQ out of sample (rolling): 0.108
###Markdown
Key Takeaways:* Traditional methods of validation and cross-validation are problematic for time series prediction problems* The solution is to use a "walk-forward" approach which incorporates new information as it becomes available.* This approach gives us a more realistic view of how effective our model would truly have been in the past, and helps to avoid the overfitting trap.* It's often important to exclude data which is too far in the past using a rolling window. IntroductionThis is the fourth post in my series on transforming data into alpha. If you haven't yet see the [framework overview](data_management.html), [feature engineering guide](feature_engineering.html), or [feature_selection_guide](feature_selection.html), please take a minute to read those first. This post will build on the framework and philosophy presented there. This post is going to delve into the mechanics of _walk-forward modeling_ which is, in my view, the most robust way to train and apply machine learning models in inherently sequential domains like finance. If you'd like to replicate and experiment with the below code, _you can download the source notebook for this post by right-clicking on the below button and choosing "save link as"_ The domain of market prediction presents several unique challenges for machine learning practitioners which do not exist in spam detection, natural language processing, image recognition, or other common areas of machine learning success, including:* Low signal-to-noise ratio* Non-stationarity (aka regime switching)* Market adaptation (aka reflexivity)The combination of these challenges make it difficult to train and tune models which _generalize_ to the future, where it really matters. This, in my view, has led many to conclude that markets cannot be predicted with machine learning techniques.This is a flawed conclusion. Instead, we should conclude that it's imperative to follow a modeling process which (1) avoids the trap of overfitting models to noise, (2) takes into account the sequential dimension of time series data, and (3) allows us to get an accurate view about how various models would have performed in the past _given the information we would have had at that time_."But wait!" I hear you saying, "Overfitting is not unique to market prediction!". True, but the commonly used techniques of _train-test split_ and _cross-validation_ each have major flaws when applied to an inherently sequential set of financial time series data. The overriding objective of the methods described here is to __overcome the issues inherent in traditional cross validation approachs__. Issues with typical approaches__Train-Test Split:__ One frequently recommended approach in finance ML tutorials is to split the data (commonly, by time) and to train on the first chunk of data and test out-of-sample on the second. This is not wrong per se, but creates two problems:1. Only a fraction of data can be used to generate out-of-sample performance of the model. If you split 80/20, then you only get results for 2 of every 10 years of data. Yuck.2. You're implicitly biased towards the most recent period. Since we're likely going to use the older data to train and newer data to test (for good reason...) you are implicitly searching for a model which was in favor during the most recent couple of years. Maybe last year is a good approximation for next year, but maybe it's not. Maybe 2008 would hold some important lessons for us to evaluate, but these would be lost to us. 
You might consider train/test split with random (or non-random) sampling but this creates other fairly obvious issues with peeking into the future. __Cross Validation:__ In normal ML usage, cross-validation methods, such as the [K-fold cross validation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html), provide an elegant way to have our digital cake and eat it too. However, with inherently sequential datasets, it's really not kosher to train a model on 2016 and 2018 to predict 2017. Another common method, [leave one out cross-validation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeaveOneOut.htmlsklearn.model_selection.LeaveOneOut), creates even more extreme peeking into the future. _These issues are especially intolerable if your model predicts values multiple bars in the future_, which results in your models being trained on data which significantly overlaps with your test data - a subtle but deadly form of data spillover. The Solution: Walk-forward Train/TestBut we are not stuck with either of these problematic approaches. The better - and I think much more intuitive - approach is to simulate models in a "walk-forward" sequence, periodically re-training the model to incorporate all data available at that point in time. The below picture is worth at least 1000 words. Say we have six years of data to train + test. At the end of the first year (T=1) we could have built model M1, which was trained on the first year's data (the grey bar). After the 5th year, we could use up to five years of data to train model M5. Getting StartedAs in prior posts, we'll begin by fetching some real stock price data and then generating from it a synthetic dataset which will make it easier to follow the principles of this tutorial. Please refer to the previous post [Feature Selection](feature_selection.html) for further explanation of the reasoning and mechanics of creating synthetic data.
###Code
from IPython.core.display import HTML,Image
import sys
sys.path.append('/anaconda/')
import config
import numpy as np
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
# Needed for this issue: https://github.com/pydata/pandas-datareader/issues/534
import pandas_datareader.data as web
%matplotlib inline
def get_symbols(symbols,data_source, begin_date=None,end_date=None):
out = pd.DataFrame()
for symbol in symbols:
df = web.DataReader(symbol, data_source,begin_date, end_date)[['AdjOpen','AdjHigh','AdjLow','AdjClose','AdjVolume']].reset_index()
df.columns = ['date','open','high','low','close','volume'] #my convention: always lowercase
df['symbol'] = symbol # add a new column which contains the symbol so we can keep multiple symbols in the same dataframe
df = df.set_index(['date','symbol'])
out = pd.concat([out,df],axis=0) #stacks on top of previously collected data
return out.sort_index()
prices = get_symbols(['AAPL','CSCO','AMZN','YHOO','MSFT'],data_source='quandl',begin_date='2012-01-01',end_date=None)
# note, we're only using real price data to get an accurate date/symbol index set.
###Output
_____no_output_____
###Markdown
Similar to the [previous post](feature_selection.html), we will generate some random data to serve as features _and then derive_ an outcome (response) variable from the features (to ensure that our made-up features will have ability to partially predict the target variable). However, this go around we'll add two non-random patterns to the data. First, we will add some amount of "memory" (autoregression) to the white noise with a short function called `add_memory()`. This will make the data somewhat less random and more similar to actual markets - which I think we can all agree is not a purely gaussian random walk... Second, we'll add trend to the importance of each feature to the target variable to simulate the reality that various features - and the market factors they represent - move in and out of favor over time (aka "regime shift"). * Feature `f01` is becoming more important* Feature `f02` is becoming less important* Feature `f03` is oscillating between very important and unimportant (using a sine wave pattern)* Feature `f04` is not changing
###Code
num_obs = prices.close.count()
def add_memory(s,n_days=50,memory_strength=0.1):
''' adds autoregressive behavior to series of data'''
add_ewm = lambda x: (1-memory_strength)*x + memory_strength*x.ewm(n_days).mean()
out = s.groupby(level='symbol').apply(add_ewm)
return out
# generate feature data
f01 = pd.Series(np.random.randn(num_obs),index=prices.index)
f01 = add_memory(f01,10,0.1)
f02 = pd.Series(np.random.randn(num_obs),index=prices.index)
f02 = add_memory(f02,10,0.1)
f03 = pd.Series(np.random.randn(num_obs),index=prices.index)
f03 = add_memory(f03,10,0.1)
f04 = pd.Series(np.random.randn(num_obs),index=prices.index)
f04 = f04 # no memory
## now, create response variable such that it is related to features
# f01 becomes increasingly important, f02 becomes decreasingly important,
# f03 oscillates in importance, f04 is stationary, finally a noise component is added
outcome = f01 * np.linspace(0.5,1.5,num_obs) + \
f02 * np.linspace(1.5,0.5,num_obs) + \
f03 * pd.Series(np.sin(2*np.pi*np.linspace(0,1,num_obs)*2)+1,index=f03.index) + \
f04 + \
np.random.randn(num_obs) * 3
outcome.name = 'outcome'
###Output
_____no_output_____
###Markdown
So, we've now got four features which contain random and non-random elements, and we have an `outcome` target variable which can be partially explained by these features. Let's run a quick linear model to verify that this worked as expected:
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression()
features = pd.concat([f01,f02,f03,f04],axis=1)
features.columns = ['f01','f02','f03','f04']
model.fit(X=features,y=outcome)
print('RSQ: '+str(model.score(X=features,y=outcome)))
print('Regression Coefficients: '+str(model.coef_))
###Output
RSQ: 0.26872811401332664
Regression Coefficients: [0.97587665 1.04585218 1.01461189 0.99374324]
###Markdown
Indeed. A simple, linear model run on the entire dataset shows a remarkable (for this domain) RSQ of >0.25 and approximately equal coefficients for each of the features, suggesting that they're all contributing strongly to the model. However, this overlooks the very important reality that our data is non-stationary and what matters is how the model will _generalize_ to future, unseen data. To illustrate, I'll re-run the same model after splitting the data into train/test sets, with the first 80% used to train and the last 20% used to test.
###Code
split_point = int(0.80*len(outcome))
X_train = features.iloc[:split_point,:]
y_train = outcome.iloc[:split_point]
X_test = features.iloc[split_point:,:]
y_test = outcome.iloc[split_point:]
model = LinearRegression()
model.fit(X=X_train,y=y_train)
print('RSQ in sample: '+str(model.score(X=X_train,y=y_train)))
print('RSQ out of sample: '+str(model.score(X=X_test,y=y_test)))
print('Regression Coefficients: '+str(model.coef_))
###Output
RSQ in sample: 0.29476591841646305
RSQ out of sample: 0.13667733307962338
Regression Coefficients: [0.8361922 1.15876943 1.20372473 1.01599709]
###Markdown
Here, the results tell a different story. We see that the model's RSQ on the unseen test data degraded to only about half of the value on the training data. Even if we were still satisfied with the degraded RSQ value, we would carry concern that perhaps the next period of time the models would degrade even further. This is where our walk-forward process comes in. A better approach... Here, I'll outline the basic design pattern for a robust approach to walk-forward modeling. This is simplified. In real-world usage, we'd likely add many enhancements but this will illustrate the concept. Training modelsThe first step of the walk-forward approach is to train our model _using only the data available at any given point in time_. Theoretically, we have new data every bar so could retrain the model daily if we wished. However, for simplicity we'll retrain much less frequently - at the end of each calendar year.We'll first create a list of timestamps for all end-of-year dates in our dataset. These will become our days on which to retrain models. Then, on each of the recalculation dates, we'll follow a simple procedure: 1. slice out only the data which has happened up until that point2. train a model with that data3. save a copy of that fitted model object in a pandas Series called `models` which is indexed by the recalculation date. We could just as easily have used a dict or other data structure, but I find that pandas Series works well for this purpose. Pandas is not just for numbers, after all...
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
#print('recalc_dates:')
#print(recalc_dates)
#print()
models = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(None,date),level='date',drop_level=False)
y_train = outcome.xs(slice(None,date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models.loc[date] = model
print("Training on the first {} records, through {}"\
.format(len(y_train),y_train.index.get_level_values('date').max()))
#print("Coefficients: {}".format((model.coef_)))
#print()
###Output
Training on the first 310 records, through 2012-03-30 00:00:00
Training on the first 625 records, through 2012-06-29 00:00:00
Training on the first 940 records, through 2012-09-28 00:00:00
Training on the first 1250 records, through 2012-12-31 00:00:00
Training on the first 1550 records, through 2013-03-28 00:00:00
Training on the first 1870 records, through 2013-06-28 00:00:00
Training on the first 2190 records, through 2013-09-30 00:00:00
Training on the first 2510 records, through 2013-12-31 00:00:00
Training on the first 2815 records, through 2014-03-31 00:00:00
Training on the first 3130 records, through 2014-06-30 00:00:00
Training on the first 3450 records, through 2014-09-30 00:00:00
Training on the first 3770 records, through 2014-12-31 00:00:00
Training on the first 4075 records, through 2015-03-31 00:00:00
Training on the first 4390 records, through 2015-06-30 00:00:00
Training on the first 4710 records, through 2015-09-30 00:00:00
Training on the first 5030 records, through 2015-12-31 00:00:00
Training on the first 5335 records, through 2016-03-31 00:00:00
Training on the first 5655 records, through 2016-06-30 00:00:00
Training on the first 5975 records, through 2016-09-30 00:00:00
Training on the first 6290 records, through 2016-12-30 00:00:00
Training on the first 6600 records, through 2017-03-31 00:00:00
Training on the first 6905 records, through 2017-06-30 00:00:00
Training on the first 7155 records, through 2017-09-29 00:00:00
Training on the first 7403 records, through 2017-12-29 00:00:00
###Markdown
To visualize how the model is changing over time, I'll create a simple plotting function of coefficients:
###Code
def extract_coefs(models):
coefs = pd.DataFrame()
for i,model in enumerate(models):
model_coefs = pd.Series(model.coef_,index=['f01','f02','f03','f04']) #extract coefficients for model
model_coefs.name = models.index[i] # name it with the recalc date
coefs = pd.concat([coefs,model_coefs],axis=1)
return coefs.T
extract_coefs(models).plot(title='Coefficients for Expanding Window Model')
###Output
_____no_output_____
###Markdown
Notice a few things about how the model's coefficients change over time: * The first feature gets increasing weight as it becomes a more important explanatory factor in the target variable,* the second feature does just the opposite, as it should,* the third feature oscillates before converging on a relatively lower endpoint, and * the fourth feature is mostly unchangedBut, in a world of constantly shifting regimes, we can do better. We know that, by the end fo the dataset, `f01` is actually much superior to `f02` however they end at about equal weights. This is because in our final model building instance, we are using _all_ of our data to train, and this entire set of data correctly shows that _on average_ f02 is as useful as `f01`, ignoring that `f02`'s best days are long past.
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
models = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models.loc[date] = model
print("Training on the most recent {} records".format(len(y_train)))
#print("Coefficients: {}".format((model.coef_)))
extract_coefs(models).plot(title='Coefficients for Rolling Window Model')
###Output
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 305 records
Training on the most recent 305 records
Training on the most recent 320 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 305 records
Training on the most recent 320 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 305 records
Training on the most recent 250 records
Training on the most recent 248 records
###Markdown
This approach of only considering the 90 days' data prior to each model re-training has had a big impact. We now see that `f01` has overtaken `f02` in significance by the end of the time period, as we'd hope. The on-again, off-again feature `f03` has risen and fallen in prominence over time, again just as we'd hope. Using modelsThe second stage of the process is _using_ these walk-forward models. The process is similar and equally simple. First, we create two arrays, `begin_dates` and `end_dates`, which contain the dates on which each model is used. For instance, for the first model (i=0), we will apply this model to features beginning on the date we trained the model (`recalc_date`) until the day the next model is trained. The `end_dates` array therefore drops the 0th element and appends to the end a far-off date (in the year 2099). We can equally easily use this pattern to apply the models to true out of sample features. As long as it's not later than the year 2099, we'll simply apply the latest and greatest model we have.
###Code
begin_dates = models.index
end_dates = models.index[1:].append(pd.to_datetime(['2099-12-31']))
predictions = pd.Series(index=features.index)
for i,model in enumerate(models): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions.loc[X.index] = p
predictions.shape
###Output
_____no_output_____
###Markdown
So, this looped through each of the models we had trained at various points in time and, for each, used that model to predict the period of time until the next model became available. It runs! Now we can confirm that the rolling model is, in fact, better than the "expanding" model by making predictions with each and comparing to truth.
###Code
models_expanding_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(None,date),level='date',drop_level=False)
y_train = outcome.xs(slice(None,date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models_expanding_window.loc[date] = model
models_rolling_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models_rolling_window.loc[date] = model
begin_dates = models.index
end_dates = models.index[1:].append(pd.to_datetime(['2099-12-31']))
predictions_expanding_window = pd.Series(index=features.index)
for i,model in enumerate(models_expanding_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_expanding_window.loc[X.index] = p
predictions_rolling_window = pd.Series(index=features.index)
for i,model in enumerate(models_rolling_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_rolling_window.loc[X.index] = p
from sklearn.metrics import r2_score
common_idx = outcome.dropna().index.intersection(predictions_expanding_window.dropna().index)
rsq_expanding = r2_score(y_true = outcome[common_idx],y_pred=predictions_expanding_window[common_idx])
rsq_rolling = r2_score(y_true = outcome[common_idx],y_pred=predictions_rolling_window[common_idx])
print("Expanding Window RSQ: {}".format(round(rsq_expanding,3)))
print("Rolling Window RSQ: {}".format(round(rsq_rolling,3)))
###Output
Expanding Window RSQ: 0.256
Rolling Window RSQ: 0.296
###Markdown
Great! All of that work added about 3 points to the RSQ, which is certainly worth the effort. Avoiding complexityThe value of this walk-forward methodology is greatest when it helps you to avoid the scourge of overfitting. The linear regression model used up to this point is relatively resistant to overfit, since it has few parameters. Let's say that, instead of a simple linear regression, we used the much more overfit-prone DecisionTree, which has tendency to "memorize the past" rather than to recognize patterns in it.
###Code
from sklearn.tree import DecisionTreeRegressor
split_point = int(0.80*len(outcome))
X_train = features.iloc[:split_point,:]
y_train = outcome.iloc[:split_point]
X_test = features.iloc[split_point:,:]
y_test = outcome.iloc[split_point:]
model = DecisionTreeRegressor(max_depth=3)
model.fit(X=X_train,y=y_train)
print('RSQ in sample: '+str(round(model.score(X=X_train,y=y_train),3)))
print('RSQ out of sample: '+str(round(model.score(X=X_test,y=y_test),3)))
###Output
RSQ in sample: 0.179
RSQ out of sample: 0.008
###Markdown
This leads to overfit disaster! Now, we'll do the same with our walk-forward framework:
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
models_rolling_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('365 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('365 days'),date),level='date',drop_level=False)
model = DecisionTreeRegressor(max_depth=3)
model.fit(X_train,y_train)
models_rolling_window.loc[date] = model
predictions_rolling_window = pd.Series(index=features.index)
for i,model in enumerate(models_rolling_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_rolling_window.loc[X.index] = p
common_idx = y_test.dropna().index.intersection(predictions_rolling_window.dropna().index)
rsq_rolling = r2_score(y_true = y_test[common_idx],y_pred=predictions_rolling_window[common_idx])
print("RSQ out of sample (rolling): {}".format(round(rsq_rolling,3)))
###Output
RSQ out of sample (rolling): 0.108
###Markdown
Key Takeaways:* Traditional methods of validation and cross-validation are problematic for time series prediction problems* The solution is to use a "walk-forward" approach which incorporates new information as it becomes available.* This approach gives us a more realistic view of how effective our model would truly have been in the past, and helps to avoid the overfitting trap.* It's often important to exclude data which is too far in the past using a rolling window. IntroductionThis is the fourth post in my series on transforming data into alpha. If you haven't yet see the [framework overview](data_management.html), [feature engineering guide](feature_engineering.html), or [feature_selection_guide](feature_selection.html), please take a minute to read those first. This post will build on the framework and philosophy presented there. This post is going to delve into the mechanics of _walk-forward modeling_ which is, in my view, the most robust way to train and apply machine learning models in inherently sequential domains like finance. If you'd like to replicate and experiment with the below code, _you can download the source notebook for this post by right-clicking on the below button and choosing "save link as"_ The domain of market prediction presents several unique challenges for machine learning practitioners which do not exist in spam detection, natural language processing, image recognition, or other common areas of machine learning success, including:* Low signal-to-noise ratio* Non-stationarity (aka regime switching)* Market adaptation (aka reflexivity)The combination of these challenges make it difficult to train and tune models which _generalize_ to the future, where it really matters. This, in my view, has led many to conclude that markets cannot be predicted with machine learning techniques.This is a flawed conclusion. Instead, we should conclude that it's imperative to follow a modeling process which (1) avoids the trap of overfitting models to noise, (2) takes into account the sequential dimension of time series data, and (3) allows us to get an accurate view about how various models would have performed in the past _given the information we would have had at that time_."But wait!" I hear you saying, "Overfitting is not unique to market prediction!". True, but the commonly used techniques of _train-test split_ and _cross-validation_ each have major flaws when applied to an inherently sequential set of financial time series data. The overriding objective of the methods described here is to __overcome the issues inherent in traditional cross validation approachs__. Issues with typical approaches__Train-Test Split:__ One frequently recommended approach in finance ML tutorials is to split the data (commonly, by time) and to train on the first chunk of data and test out-of-sample on the second. This is not wrong per se, but creates two problems:1. Only a fraction of data can be used to generate out-of-sample performance of the model. If you split 80/20, then you only get results for 2 of every 10 years of data. Yuck.2. You're implicitly biased towards the most recent period. Since we're likely going to use the older data to train and newer data to test (for good reason...) you are implicitly searching for a model which was in favor during the most recent couple of years. Maybe last year is a good approximation for next year, but maybe it's not. Maybe 2008 would hold some important lessons for us to evaluate, but these would be lost to us. 
You might consider train/test split with random (or non-random) sampling, but this creates other fairly obvious issues with peeking into the future.

__Cross Validation:__ In normal ML usage, cross-validation methods, such as [K-fold cross validation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html), provide an elegant way to have our digital cake and eat it too. However, with inherently sequential datasets, it's really not kosher to train a model on 2016 and 2018 to predict 2017. Another common method, [leave one out cross-validation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeaveOneOut.html#sklearn.model_selection.LeaveOneOut), creates even more extreme peeking into the future. _These issues are especially intolerable if your model predicts values multiple bars in the future_, which results in your models being trained on data which significantly overlaps with your test data - a subtle but deadly form of data spillover.

The Solution: Walk-forward Train/Test

But we are not stuck with either of these problematic approaches. The better - and I think much more intuitive - approach is to simulate models in a "walk-forward" sequence, periodically re-training the model to incorporate all data available at that point in time. The below picture is worth at least 1000 words. Say we have six years of data to train + test. At the end of the first year (T=1) we could have built model M1, which was trained on the first year's data (the grey bar). After the 5th year, we could use up to five years of data to train model M5.

Getting Started

As in prior posts, we'll begin by fetching some real stock price data and then generating from it a synthetic dataset which will make it easier to follow the principles of this tutorial. Please refer to the previous post [Feature Selection](feature_selection.html) for further explanation of the reasoning and mechanics of creating synthetic data.
###Code
from IPython.core.display import HTML,Image
import sys
sys.path.append('/anaconda/')
import config
import numpy as np
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
# Needed for this issue: https://github.com/pydata/pandas-datareader/issues/534
import pandas_datareader.data as web
%matplotlib inline
def get_symbols(symbols,data_source, begin_date=None,end_date=None):
out = pd.DataFrame()
for symbol in symbols:
df = web.DataReader(symbol, data_source,begin_date, end_date)[['AdjOpen','AdjHigh','AdjLow','AdjClose','AdjVolume']].reset_index()
df.columns = ['date','open','high','low','close','volume'] #my convention: always lowercase
df['symbol'] = symbol # add a new column which contains the symbol so we can keep multiple symbols in the same dataframe
df = df.set_index(['date','symbol'])
out = pd.concat([out,df],axis=0) #stacks on top of previously collected data
return out.sort_index()
prices = get_symbols(['AAPL','CSCO','AMZN','YHOO','MSFT'],data_source='quandl',begin_date='2012-01-01',end_date=None)
# note, we're only using real price data to get an accurate date/symbol index set.
###Output
_____no_output_____
###Markdown
Similar to the [previous post](feature_selection.html), we will generate some random data to serve as features _and then derive_ an outcome (response) variable from the features (to ensure that our made-up features will have ability to partially predict the target variable). However, this go around we'll add two non-random patterns to the data. First, we will add some amount of "memory" (autoregression) to the white noise with a short function called `add_memory()`. This will make the data somewhat less random and more similar to actual markets - which I think we can all agree is not a purely gaussian random walk... Second, we'll add trend to the importance of each feature to the target variable to simulate the reality that various features - and the market factors they represent - move in and out of favor over time (aka "regime shift"). * Feature `f01` is becoming more important* Feature `f02` is becoming less important* Feature `f03` is oscillating between very important and unimportant (using a sine wave pattern)* Feature `f04` is not changing
###Code
num_obs = prices.close.count()
def add_memory(s,n_days=50,memory_strength=0.1):
''' adds autoregressive behavior to series of data'''
add_ewm = lambda x: (1-memory_strength)*x + memory_strength*x.ewm(n_days).mean()
out = s.groupby(level='symbol').apply(add_ewm)
return out
# generate feature data
f01 = pd.Series(np.random.randn(num_obs),index=prices.index)
f01 = add_memory(f01,10,0.1)
f02 = pd.Series(np.random.randn(num_obs),index=prices.index)
f02 = add_memory(f02,10,0.1)
f03 = pd.Series(np.random.randn(num_obs),index=prices.index)
f03 = add_memory(f03,10,0.1)
f04 = pd.Series(np.random.randn(num_obs),index=prices.index)
f04 = f04 # no memory
## now, create response variable such that it is related to features
# f01 becomes increasingly important, f02 becomes decreasingly important,
# f03 oscillates in importance, f04 is stationary, finally a noise component is added
outcome = f01 * np.linspace(0.5,1.5,num_obs) + \
f02 * np.linspace(1.5,0.5,num_obs) + \
f03 * pd.Series(np.sin(2*np.pi*np.linspace(0,1,num_obs)*2)+1,index=f03.index) + \
f04 + \
np.random.randn(num_obs) * 3
outcome.name = 'outcome'
###Output
_____no_output_____
###Markdown
So, we've now got four features which contain random and non-random elements, and we have an `outcome` target variable which can be partially explained by these features. Let's run a quick linear model to verify that this worked as expected:
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression()
features = pd.concat([f01,f02,f03,f04],axis=1)
features.columns = ['f01','f02','f03','f04']
model.fit(X=features,y=outcome)
print('RSQ: '+str(model.score(X=features,y=outcome)))
print('Regression Coefficients: '+str(model.coef_))
###Output
RSQ: 0.26872811401332664
Regression Coefficients: [0.97587665 1.04585218 1.01461189 0.99374324]
###Markdown
Indeed. A simple, linear model run on the entire dataset shows a remarkable (for this domain) RSQ of >0.25 and approximately equal coefficients for each of the features, suggesting that they're all contributing strongly to the model. However, this overlooks the very important reality that our data is non-stationary and what matters is how the model will _generalize_ to future, unseen data. To illustrate, I'll re-run the same model after splitting the data into train/test sets, with the first 80% used to train and the last 20% used to test.
###Code
split_point = int(0.80*len(outcome))
X_train = features.iloc[:split_point,:]
y_train = outcome.iloc[:split_point]
X_test = features.iloc[split_point:,:]
y_test = outcome.iloc[split_point:]
model = LinearRegression()
model.fit(X=X_train,y=y_train)
print('RSQ in sample: '+str(model.score(X=X_train,y=y_train)))
print('RSQ out of sample: '+str(model.score(X=X_test,y=y_test)))
print('Regression Coefficients: '+str(model.coef_))
###Output
RSQ in sample: 0.29476591841646305
RSQ out of sample: 0.13667733307962338
Regression Coefficients: [0.8361922 1.15876943 1.20372473 1.01599709]
###Markdown
Here, the results tell a different story. We see that the model's RSQ on the unseen test data degraded to only about half of the value on the training data. Even if we were still satisfied with the degraded RSQ value, we would worry that perhaps in the next period of time the model would degrade even further. This is where our walk-forward process comes in.

A better approach...

Here, I'll outline the basic design pattern for a robust approach to walk-forward modeling. This is simplified. In real-world usage, we'd likely add many enhancements but this will illustrate the concept.

Training models

The first step of the walk-forward approach is to train our model _using only the data available at any given point in time_. Theoretically, we have new data every bar so could retrain the model daily if we wished. However, for simplicity we'll retrain much less frequently - at the end of each calendar quarter.

We'll first create a list of timestamps for all quarter-end dates in our dataset (note the `'Q'` resampling frequency in the code below). These will become our days on which to retrain models. Then, on each of the recalculation dates, we'll follow a simple procedure:

1. slice out only the data which has happened up until that point
2. train a model with that data
3. save a copy of that fitted model object in a pandas Series called `models` which is indexed by the recalculation date.

We could just as easily have used a dict or other data structure, but I find that pandas Series works well for this purpose. Pandas is not just for numbers, after all...
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
#print('recalc_dates:')
#print(recalc_dates)
#print()
models = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(None,date),level='date',drop_level=False)
y_train = outcome.xs(slice(None,date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models.loc[date] = model
print("Training on the first {} records, through {}"\
.format(len(y_train),y_train.index.get_level_values('date').max()))
#print("Coefficients: {}".format((model.coef_)))
#print()
###Output
Training on the first 310 records, through 2012-03-30 00:00:00
Training on the first 625 records, through 2012-06-29 00:00:00
Training on the first 940 records, through 2012-09-28 00:00:00
Training on the first 1250 records, through 2012-12-31 00:00:00
Training on the first 1550 records, through 2013-03-28 00:00:00
Training on the first 1870 records, through 2013-06-28 00:00:00
Training on the first 2190 records, through 2013-09-30 00:00:00
Training on the first 2510 records, through 2013-12-31 00:00:00
Training on the first 2815 records, through 2014-03-31 00:00:00
Training on the first 3130 records, through 2014-06-30 00:00:00
Training on the first 3450 records, through 2014-09-30 00:00:00
Training on the first 3770 records, through 2014-12-31 00:00:00
Training on the first 4075 records, through 2015-03-31 00:00:00
Training on the first 4390 records, through 2015-06-30 00:00:00
Training on the first 4710 records, through 2015-09-30 00:00:00
Training on the first 5030 records, through 2015-12-31 00:00:00
Training on the first 5335 records, through 2016-03-31 00:00:00
Training on the first 5655 records, through 2016-06-30 00:00:00
Training on the first 5975 records, through 2016-09-30 00:00:00
Training on the first 6290 records, through 2016-12-30 00:00:00
Training on the first 6600 records, through 2017-03-31 00:00:00
Training on the first 6905 records, through 2017-06-30 00:00:00
Training on the first 7155 records, through 2017-09-29 00:00:00
Training on the first 7403 records, through 2017-12-29 00:00:00
###Markdown
To visualize how the model is changing over time, I'll create a simple plotting function of coefficients:
###Code
def extract_coefs(models):
coefs = pd.DataFrame()
for i,model in enumerate(models):
model_coefs = pd.Series(model.coef_,index=['f01','f02','f03','f04']) #extract coefficients for model
model_coefs.name = models.index[i] # name it with the recalc date
coefs = pd.concat([coefs,model_coefs],axis=1)
return coefs.T
extract_coefs(models).plot(title='Coefficients for Expanding Window Model')
###Output
_____no_output_____
###Markdown
Notice a few things about how the model's coefficients change over time:

* The first feature gets increasing weight as it becomes a more important explanatory factor in the target variable,
* the second feature does just the opposite, as it should,
* the third feature oscillates before converging on a relatively lower endpoint, and
* the fourth feature is mostly unchanged.

But, in a world of constantly shifting regimes, we can do better. We know that, by the end of the dataset, `f01` is actually much superior to `f02`; however, they end at about equal weights. This is because in our final model-building instance, we are using _all_ of our data to train, and this entire set of data correctly shows that _on average_ `f02` is as useful as `f01`, ignoring that `f02`'s best days are long past.
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
models = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models.loc[date] = model
print("Training on the most recent {} records".format(len(y_train)))
#print("Coefficients: {}".format((model.coef_)))
extract_coefs(models).plot(title='Coefficients for Rolling Window Model')
###Output
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 305 records
Training on the most recent 305 records
Training on the most recent 320 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 305 records
Training on the most recent 320 records
Training on the most recent 315 records
Training on the most recent 315 records
Training on the most recent 310 records
Training on the most recent 305 records
Training on the most recent 250 records
Training on the most recent 248 records
###Markdown
This approach of only considering the 90 days' data prior to each model re-training has had a big impact. We now see that `f01` has overtaken `f02` in significance by the end of the time period, as we'd hope. The on-again, off-again feature `f03` has risen and fallen in prominence over time, again just as we'd hope.

Using models

The second stage of the process is _using_ these walk-forward models. The process is similar and equally simple. First, we create two arrays, `begin_dates` and `end_dates`, which contain the dates on which each model is used. For instance, for the first model (i=0), we will apply this model to features beginning on the date we trained the model (`recalc_date`) until the day the next model is trained. The `end_dates` array therefore drops the 0th element and appends to the end a far-off date (in the year 2099). We can equally easily use this pattern to apply the models to true out-of-sample features. As long as it's not later than the year 2099, we'll simply apply the latest and greatest model we have.
###Code
begin_dates = models.index
end_dates = models.index[1:].append(pd.to_datetime(['2099-12-31']))
predictions = pd.Series(index=features.index)
for i,model in enumerate(models): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions.loc[X.index] = p
predictions.shape
###Output
_____no_output_____
###Markdown
So, this looped through each of the models we had trained at various points in time and, for each, used that model to predict the period of time until the next model became available. It runs! Now we can confirm that the rolling model is, in fact, better than the "expanding" model by making predictions with each and comparing to truth.
###Code
models_expanding_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(None,date),level='date',drop_level=False)
y_train = outcome.xs(slice(None,date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models_expanding_window.loc[date] = model
models_rolling_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('90 days'),date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models_rolling_window.loc[date] = model
begin_dates = models.index
end_dates = models.index[1:].append(pd.to_datetime(['2099-12-31']))
predictions_expanding_window = pd.Series(index=features.index)
for i,model in enumerate(models_expanding_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_expanding_window.loc[X.index] = p
predictions_rolling_window = pd.Series(index=features.index)
for i,model in enumerate(models_rolling_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_rolling_window.loc[X.index] = p
from sklearn.metrics import r2_score
common_idx = outcome.dropna().index.intersection(predictions_expanding_window.dropna().index)
rsq_expanding = r2_score(y_true = outcome[common_idx],y_pred=predictions_expanding_window[common_idx])
rsq_rolling = r2_score(y_true = outcome[common_idx],y_pred=predictions_rolling_window[common_idx])
print("Expanding Window RSQ: {}".format(round(rsq_expanding,3)))
print("Rolling Window RSQ: {}".format(round(rsq_rolling,3)))
###Output
Expanding Window RSQ: 0.256
Rolling Window RSQ: 0.296
###Markdown
Great! All of that work added about 4 points to the RSQ (0.256 to 0.296), which is certainly worth the effort.

Avoiding complexity

The value of this walk-forward methodology is greatest when it helps you to avoid the scourge of overfitting. The linear regression model used up to this point is relatively resistant to overfitting, since it has few parameters. Let's say that, instead of a simple linear regression, we used the much more overfit-prone DecisionTree, which has a tendency to "memorize the past" rather than to recognize patterns in it.
###Code
from sklearn.tree import DecisionTreeRegressor
split_point = int(0.80*len(outcome))
X_train = features.iloc[:split_point,:]
y_train = outcome.iloc[:split_point]
X_test = features.iloc[split_point:,:]
y_test = outcome.iloc[split_point:]
model = DecisionTreeRegressor(max_depth=3)
model.fit(X=X_train,y=y_train)
print('RSQ in sample: '+str(round(model.score(X=X_train,y=y_train),3)))
print('RSQ out of sample: '+str(round(model.score(X=X_test,y=y_test),3)))
###Output
RSQ in sample: 0.179
RSQ out of sample: 0.008
###Markdown
This leads to an overfitting disaster! Now, we'll do the same with our walk-forward framework:
###Code
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
models_rolling_window = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(date-pd.Timedelta('365 days'),date),level='date',drop_level=False)
y_train = outcome.xs(slice(date-pd.Timedelta('365 days'),date),level='date',drop_level=False)
model = DecisionTreeRegressor(max_depth=3)
model.fit(X_train,y_train)
models_rolling_window.loc[date] = model
predictions_rolling_window = pd.Series(index=features.index)
for i,model in enumerate(models_rolling_window): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions_rolling_window.loc[X.index] = p
common_idx = y_test.dropna().index.intersection(predictions_rolling_window.dropna().index)
rsq_rolling = r2_score(y_true = y_test[common_idx],y_pred=predictions_rolling_window[common_idx])
print("RSQ out of sample (rolling): {}".format(round(rsq_rolling,3)))
###Output
RSQ out of sample (rolling): 0.108
|
MotionPlanningHigherDimensions.ipynb | ###Markdown
Section III. MOTION PLANNING

Chapter 10. Motion Planning in Higher Dimensions

The algorithms we have studied for the 2D case do not apply to many other systems of interest: quadcopters that avoid obstacles in 3D, articulated robot arms, multi-robot teams, and robots that manipulate objects. The main issue that we are faced with is that previous geometric planning methods require the ability to explicitly perform computations about the shape of obstacles. In higher-dimensional configuration spaces, this is rather challenging. This chapter will consider how to address problems with configuration spaces of arbitrary dimension. (In fact, these algorithms can also be applied to problems of lower dimension too --- in low-dimensional spaces, planning is fairly easy!)

Geometric planning methods like visibility graphs and cell decomposition can often be extended to 3D environments with polyhedral obstacles with some work. There are also algorithms for path planning with constraints expressed as semi-algebraic sets that work in spaces of arbitrarily high dimension, but their running time is exponential in dimensionality, and they are more of an intellectual curiosity since they have never been practically implemented. Grid search planners can also be extended to higher dimensional grids, but they must in the worst case explore an exponential number of grid vertices.

In fact, it has been proven that even feasible path planning is NP-hard in the case of articulated $n$-joint robots. Surprisingly, optimal path planning in the presence of 3D polygonal obstacles is also NP-hard in the number of obstacle vertices! This dramatic increase in the complexity of exact algorithms in dimensions 3 and higher has led to the development of several approximate methods, which are discussed in detail in this chapter.

Implicit C-obstacle representation
----------------------------------

C-obstacles for general articulated robots are, in general, even more complex than they were in 2D and 3D. As a result, most motion planning algorithms outside of planar environments do not attempt to build an explicit representation of C-obstacles, but instead opt to perform Boolean *feasibility queries* in which the collision status of a robot is queried for a given configuration: in other words, we can test whether $q\in \mathcal{C}O$ for a specific configuration $q$, but we do not have a representation of any points on $\partial \mathcal{C}O$. Specifically, the user of a planner defines a subroutine as follows:

$$Feasible(q) = \left\{ \begin{array}{ll}
T & \text{ if $q$ is in the free space} \\
F & \text{ if $q$ is in the forbidden region}
\end{array} \right.$$

A planner can then call this subroutine to probe whether a configuration is feasible. Since this will be called thousands or millions of times, fast planning in high-dimensional spaces requires efficient collision tests as described in [Chapter 7](Geometry.ipynb#Collision-Queries).

Often, we will also need to check whether a *motion* is feasible as well, usually a short segment of a path $\overline{pq}$ between configurations $p,q \in \mathcal{C}$. The process is called a *visibility query* in the case of a straight line path, and can be a user-defined subroutine or performed by the planner. The query is specified as follows:

$$Visible(p,q) = \left\{\begin{array}{ll}
T & \text{ if $\overline{pq}$ is completely inside the free space} \\
F & \text{ if $\overline{pq}$ intersects the forbidden region}
\end{array} \right.$$

In general the process of checking motions for collision is known as *dynamic collision checking*. The simplest method for doing so is simply to take small steps along the path and perform feasibility queries at each configuration. More details about this and other techniques are described in the [section below](Dynamic-collision-checking).
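To make these two queries concrete, here is a minimal sketch (my own illustration, not code from the text) for a hypothetical point robot translating in the unit square among circular obstacles; the obstacle list and the step size are assumptions chosen purely for illustration.

###Code
import numpy as np

# Hypothetical world model (an assumption for illustration): a point robot
# in the unit square, with obstacles given as (center, radius) disks.
OBSTACLES = [(np.array([0.5, 0.5]), 0.2), (np.array([0.8, 0.2]), 0.1)]

def feasible(q):
    """Boolean feasibility query: True iff configuration q is collision-free."""
    return all(np.linalg.norm(q - c) > r for c, r in OBSTACLES)

def visible(p, q, step=1e-3):
    """Dynamic collision check: step along segment pq, testing feasibility
    of each interpolated configuration."""
    n = int(np.ceil(np.linalg.norm(q - p) / step))
    return all(feasible(p + (i / max(n, 1)) * (q - p)) for i in range(n + 1))

assert feasible(np.array([0.1, 0.1]))                           # in free space
assert not visible(np.array([0.1, 0.1]), np.array([0.9, 0.9]))  # crosses a disk
###Output
_____no_output_____
###Markdown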
In addition to the Boolean feasibility query computational model, we also consider some planners that exploit knowledge encoded in an implicit function model $\mathcal{C}O = \{ q \quad |\quad f(q) \leq 0 \}$. For example, one such implicit function $f$ may be the signed distance in workspace between the robot and obstacles. (Specifically, this would return the distance when there is no collision, and the negative of the penetration depth when collision exists.) For most complex geometric models it is far more computationally expensive to perform distance and penetration depth computations than collision queries. As a result there is a trade-off between using a computational query that provides richer information vs. the added complexity of invoking the query.

Grid Search and the Curse of Dimensionality
-------------------------------------------

Let us begin by considering the case of extending grid search to $n$-D space. It is fairly straightforward to build such a grid, and collision checking for arbitrary $n$-D robots at configurations or paths can be performed relatively quickly (we shall describe methods for doing so below). However, the number of vertices that may need to be explored grows exponentially with the dimension of the space. This growth rapidly overwhelms the available computational resources, both in time and memory.

It is helpful to get a sense of the absolute scale of exponential increase to appreciate how difficult this makes the problem. Consider creating a grid in a 6-D unit hypercube $[0,1]^6$ with resolution $h$ on each axis. The number of vertices in the grid is listed in the right of the below table. Clearly, at high resolutions it would be impractical to search the entire grid.

| **Resolution $h$** | **\# vertices** |
| ------------------ | --------------- |
| 0.5 | 64 |
| 0.25 | 46,656 |
| 0.1 | 1,000,000 |
| 0.05 | 64,000,000 |
| 0.025 | 46,656,000,000 |
| 0.01 | 1,000,000,000,000 |

Let us also fix a relatively manageable resolution, say 0.1, and observe what happens as dimension varies. The following table shows how many vertices are in a grid of variable dimension $[0,1]^d$.

| **Dimension $d$** | **\# vertices** |
| ----------------- | --------------- |
| 2 | 100 |
| 3 | 1,000 |
| 6 | 1,000,000 |
| 8 | 100,000,000 |
| 10 | 10,000,000,000 |
| 15 | 1,000,000,000,000,000 |
| 20 | 100,000,000,000,000,000,000 |

Yikes! Even if feasibility checking and visibility checking were super-fast, this becomes impractical for use in dimensions of around 8. This problem is generally known as the *curse of dimensionality*.

Besides the combinatorial explosion in the number of grid cells needed to span a space, there are several other odd effects in high dimensional spaces that are counterintuitive to our experience in 2D and 3D spaces. Examples include the fact that the volume of a hypersphere drops dramatically as dimension increases. In fact the volume of a unit hypersphere approaches 0 as $d\rightarrow \infty$!
This implies that *almost all points are far* in a high dimensional space, for most reasonable definitions of "far". Another effect is that the complexity of a polytope grows dramatically. Consider a polytope in $d$-D space with $n$ faces, such as that defined by a linear inequality $A x \leq b$, with $A$ an $n \times d$ matrix and $b$ an $n$-vector. The number of vertices of the polytope is $O( {n \choose d } )$, which grows exponentially in $n$ and $d$. As a result, the complexity of the free space can be exponential even for simple constraint representations.

Since this "curse" appears so often in computational problems, it is of great interest (and often surprising) to find algorithms that circumvent these limitations. However, they tend to trade off computational complexity for other limitations. One example is *Monte-Carlo integration*, in which a sum of function values of randomly sampled points is used to estimate the integral of a function:
$$\int_a^b f(x) dx \approx \frac{b-a}{N} \sum_{i=1}^N f(x_i)
\label{eq:MonteCarlo1D}$$
where the points $x_1,\ldots,x_N$ are sampled uniformly at random from the range $[a,b]$. The approximation error of this estimate, assuming $f$ is well-behaved, is on the order of $O((b-a)/\sqrt{N})$.

Monte-Carlo integration can be generalized to higher dimensional functions. If $B$ is an axis-aligned, $n$-dimensional box $[a_1,b_1]\times\cdots\times[a_n,b_n]$, then the integral over $B$ can be approximated
$$\int_B f(x) dx \approx \frac{|B|}{N} \sum_{i=1}^N f(x_i)
\label{eq:MonteCarloND}$$
with $|B|=\prod_{i=1}^n (b_i-a_i)$ the volume of $B$ and $x_1,\ldots,x_N$ a uniform sampling over $B$. Specifically, a uniform sampling draws random variables $\epsilon_i \in [0,1]$ uniformly at random, and sets $$x_i = a_i + \epsilon_i (b_i - a_i).$$ The approximation error of ($\ref{eq:MonteCarloND}$) is very similar to the 1D case: $O(|B|/\sqrt{N})$. Observe that the dimensionality $n$ did not appear at all in this equation!

The sampling-based planning methods introduced in this chapter are somewhat inspired by Monte-Carlo integration. In particular, their performance is *not immediately affected by C-space dimensionality*. This is extremely appealing! However, like Monte-Carlo methods, these wins come at a price. In Monte-Carlo sampling, there is a hidden constant in the approximation error that depends on the variance of $f$ across the domain. Likewise, sampling-based planning induces a probabilistic chance of failure, and the risk of failure is highly dependent on the *visibility properties* of the free space. We will investigate these concepts more formally below.
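As a quick numerical illustration of this dimension-independence (my own sketch, not from the text), the following estimates $\int_B \|x\|^2 dx$ over the unit box, whose exact value is $d/3$, using the same number of samples in 2 and 10 dimensions; the choice of integrand and sample count is arbitrary.

###Code
import numpy as np

def mc_integrate(f, d, N=100_000, seed=0):
    """Monte-Carlo estimate of the integral of f over the unit box [0,1]^d."""
    x = np.random.default_rng(seed).random((N, d))  # N uniform samples; |B| = 1
    return f(x).mean()

f = lambda x: (x**2).sum(axis=1)  # exact integral over [0,1]^d is d/3
errors = {d: abs(mc_integrate(f, d) - d / 3) for d in (2, 10)}
# Both errors are O(1/sqrt(N)) -- roughly the same size despite 5x the dimension.
###Output
_____no_output_____
###Markdown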
Sampling-based motion planning
------------------------------

The most popular set of techniques for motion planning on robots with 5 or more DOFs is the class of sampling-based motion planners, most notably the probabilistic roadmap (PRM) and rapidly-exploring random tree (RRT) planners. All such techniques are roadmap methods that build a network of paths in C-space, but they use different strategies for doing so.

There are three general reasons for their popularity:

1. The same algorithm can be generalized to new problems of arbitrary dimensionality simply with changes of the $Feasible(q)$ and $Visible(p,q)$ subroutines.
2. They often produce paths quickly for high-dimensional problems that are not too maze-like, and given enough time can eventually solve problems of arbitrary complexity (probabilistic completeness).
3. They can be implemented fairly quickly by a student competent in algorithms and graph data structures.

By "high-dimensional" we mean that sampling-based planners can routinely solve problems in spaces of approximately 10D, and with tuning (or luck) can also find feasible paths in dozens or even hundreds of dimensions. However, these planners are not a "magic bullet", and a deeper analysis of their performance characteristics — both algorithmically and as they relate to the *visibility properties* of the underlying planning problem — is required to understand when they can be used effectively.

Probabilistic roadmaps

Probabilistic roadmaps (PRMs) are an approximate roadmap of the robot's free space built by randomly sampling free configurations and attempting connections between them. The roadmap can then be searched as usual for a path from the start to the goal. The basic algorithm for constructing a PRM is as follows:

1. Sample $N$ configurations at random from the C-space ([Figure 1](fig:PRM).a--b).
2. Add all feasible configurations and the start and goal to the roadmap. These are known as *milestones*. ([Figure 1](fig:PRM).c)
3. Test pairs of nearby milestones for visibility, and add visible edges to the roadmap. ([Figure 1](fig:PRM).d--e)
4. Search for a path from the start to the goal. ([Figure 1](fig:PRM).f)

************

![fig:PRM](figures/planning/prm.svg)

Figure 1. Illustrating the main steps of the PRM algorithm.

************

It can be shown that the method is *probabilistically complete*, in that if it finds a path, the path will be feasible. If it does not find a path, then this answer might be incorrect. However, the chance of this incorrect answer decreases to 0 as $N$ increases, given some relatively mild assumptions on the shape of the free space. Another useful property is that the likelihood of success is *not directly* dependent on dimensionality, but rather on the *visibility properties* of the free space. As a result, it can reliably solve high-dimensional problems that have good visibility properties, and perform poorly in low-dimensional problems that have poor visibility properties (such as narrow passages).

Algorithm and key parameters

The basic PRM algorithm is given in [Algorithm Basic-PRM](alg:PRM). The first for loop adds any sampled feasible configurations to the roadmap $(V,E)$. The second for loop checks for pairs of "nearby" milestones, and adds edges as long as the path between them is collision-free.

******************

**Algorithm Basic-PRM**(s,g,N)

1. $V \gets \{ s, g \}$.
2. $E \gets \{ \}$.
3. **for** $i=1,\ldots,N$ **do**
4.    $q \gets Sample()$
5.    **if** not Feasible($q$) **then** return to Line 3.
6.    Add $q$ to $V$ (add $q$ as a new milestone)
7.    **for** all $p \in near(q,V)$
8.       **if** Visible($p,q$) **then**
9.          Add $(p,q)$ to $E$.
10. Search $G = (V,E)$, with Cartesian distance as the edge cost, to connect $s$ and $g$.
11. **return** the path if one is found.

****************
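Here is one way Basic-PRM might look in code — a sketch, not a definitive implementation, reusing the hypothetical `feasible`/`visible` helpers defined earlier; the $R$-neighborhood connection strategy, the brute-force neighbor search, and the use of `networkx` for the graph are my own choices for brevity.

###Code
import numpy as np
import networkx as nx

def basic_prm(s, g, N=300, R=0.15, seed=1):
    """Sketch of Basic-PRM over the unit square, using feasible()/visible()."""
    rng = np.random.default_rng(seed)
    V = [s, g]                                   # milestones; s = node 0, g = node 1
    for _ in range(N):
        q = rng.random(2)                        # Sample(): uniform over the box
        if feasible(q):
            V.append(q)
    G = nx.Graph()
    G.add_nodes_from(range(len(V)))
    for i in range(len(V)):                      # near(q, V): the R-neighborhood
        for j in range(i + 1, len(V)):
            d = np.linalg.norm(V[i] - V[j])
            if d <= R and visible(V[i], V[j]):
                G.add_edge(i, j, weight=d)       # Cartesian distance as edge cost
    try:
        return [V[k] for k in nx.shortest_path(G, 0, 1, weight="weight")]
    except nx.NetworkXNoPath:
        return None

path = basic_prm(np.array([0.1, 0.1]), np.array([0.9, 0.9]))
###Output
_____no_output_____
###Markdown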
There are two key subroutines to tune, which greatly affect running time and performance:

- The sampling distribution over $\mathcal{C}$ (the $Sample()$ subroutine in Line 4).
- The method for determining nearby milestones in Line 7.

Since at most $N$ milestones will be added to the roadmap, it is important to place as many as possible inside $\mathcal{F}$ at critical locations that aid the planner in connecting the start and goal. The $Sample()$ subroutine can be tuned to do so. First, in order to have a nonnegligible chance of sampling a milestone in a useful location, PRMs require that the C-space is bounded. In the most basic uniform distribution it is assumed that $\mathcal{C}$ is a box $[a_1,b_1]\times \cdots \times [a_n,b_n]$, and $Sample()$ samples configurations uniformly across the box. However, there are other methods for improving performance, which we shall discuss later.

If we were to check all edges in the roadmap, this would lead to a total of $O(N^2)$ visibility checks. This can be rather computationally expensive for large $N$, and long paths are much more likely to collide than short ones. Hence the idea of restricting visibility tests only to "nearby" points is a fast way of determining a small set of potential edges that have a better chance of being feasible. To do this, we need first to determine a distance metric $$d(p,q)$$ that measures some notion of path length. The simplest form might measure Euclidean distance, but for configuration spaces that blend translation and rotation it is often more suitable to use a weighted geodesic metric.

Once a metric is defined, most frequently one of two methods is employed to calculate the set of milestones "near" $q$:

- The $k$-*nearest neighbors* of $q$ in $V$.
- The $R$-*neighborhood*: all milestones in $V$ within some radius $R$ of $q$.

Using fast *nearest neighbors data structures*, which will be described later, the $k$-nearest neighbors can be computed in $O(k + \log N)$ time, and the $R$-neighborhood can be computed in $O(h + \log N)$ time, where $h$ is the number of points returned. In any case, this usually saves a huge amount of time because $k$, $h$, and $\log N$ are much smaller than $N$, and distance queries are fast. If we are careful not to double-check the reverse of an edge that has been checked before, at most $kN/2$ (or $hN/2$) edges will be checked in the roadmap.
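In Python, both query types are available off the shelf, e.g. via scipy's KD-tree; the milestone data below is made up purely for illustration.

###Code
import numpy as np
from scipy.spatial import cKDTree

V = np.random.default_rng(2).random((1000, 2))   # illustrative milestone set
tree = cKDTree(V)                                # build the nearest-neighbor structure

q = np.array([0.5, 0.5])
dists, idx = tree.query(q, k=5)                  # the k=5 nearest neighbors of q
in_ball = tree.query_ball_point(q, r=0.1)        # the R-neighborhood, R = 0.1
###Output
_____no_output_____
###Markdown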
Incremental variant

As written, the basic PRM algorithm places all of the (up to $N$) samples and edges into the roadmap first before performing search. But in easy problems, perhaps fewer samples were needed to construct a roadmap that contained a solution -- say, the first $M << N$. If all we wanted was a feasible path from $s$ to $g$, it would be foolish to continue adding unnecessary points! Fortunately, it is straightforward to implement an *incremental* variant of PRM with very little additional cost. (This is in fact the typical variant used for PRM planning queries.)

***************

![fig:IncrementalPRM](figures/planning/incremental_prm.svg)

Figure 2. An incremental version of PRM adds more samples to the roadmap while maintaining the roadmap's connected components. It terminates once the start and goal are in the same component.

***************

[Algorithm Incremental-PRM](alg:IncrementalPRM) gives an implementation of Incremental-PRM using a special data structure to determine each of the *connected components* of the graph. A connected component consists of all milestones that are mutually connected by any path.

******************

**Algorithm Incremental-PRM**(s,g,T)

1. $V \gets \{ s, g \}$.
2. $E \gets \{ \}$.
3. $CC[s] \gets \{s\}$.
4. $CC[g] \gets \{g\}$.
5. **while** time-elapsed $< T$ **do**
6.    $q \gets Sample()$
7.    **if** not Feasible($q$) return to Line 5.
8.    Add $q$ to $V$
9.    $CC[q] \gets \{q\}$ (for now, $q$ gets its own connected component)
10.   **for** all $p \in near(q,V)$
11.      **if** Visible($p,q$) **then**
12.         Add $(p,q)$ to $E$.
13.         Merge $CC[p]$ and $CC[q]$ (merge connected components of connected edge)
14.   **if** $g \in CC[s]$ **then** (start and goal in same connected component)
15.      **return** the shortest path in $G = (V,E)$ between $s$ and $g$.
16. **return** "no path"

****************

Every time an isolated milestone gets added to the graph (Lines 3, 4, and 9), it gets assigned a connected component in the data structure $CC$. The connected components are maintained as more edges are added to the roadmap ([Figure 2](fig:IncrementalPRM)). Once $s$ and $g$ are in the same connected component (Line 14), the algorithm stops. In Line 5, the main loop is stopped by a *time limit* $T$ rather than an iteration limit. This means the overall running time can be controlled more precisely, which is useful if the robot needs to generate a path within a certain deadline.

To give more details about the $CC$ data structure, it can be thought of as a map from each milestone to its connected component: that is, $CC[v]$ is the set of milestones $w \in V$ that can be reached from $v$ via a path in $G=(V,E)$. After each change of $G$, $CC$ is updated to reflect any changes in connected components. The difficulty is that each time an edge gets added, the connected components of those two points need to be *merged* (Line 13). If this were done in a naive fashion (say, by storing a list of connected milestones per milestone), it could take an extremely long time ($O(|V|)$, where $|V|$ is the number of vertices currently in $V$) for each update. Fortunately, there is a special *disjoint set* (aka union-find) data structure that is very fast at maintaining these sets. With this data structure, it has been proven that a merge takes $O(\alpha(|V|))$ time on average, where $\alpha(n)$ is a very, very slow growing function of $n$ called the inverse Ackermann function. It grows so slowly, in fact, that for all practical values of $n$ it is less than 5, and hence this can be considered a constant. Overall, to perform $|E|$ edge insertions the overhead introduced by this data structure is $O(|E| \alpha(|V|))$, which is essentially $O(|E|)$, and hence no additional asymptotic running time penalty is incurred by the incremental algorithm.
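A minimal version of such a union-find structure is sketched below (with path compression and union by size); this is my own illustration of the data structure, not code from the text.

###Code
class DisjointSets:
    """Union-find: tracks connected components under edge insertions."""
    def __init__(self):
        self.parent, self.size = {}, {}

    def add(self, x):
        self.parent[x], self.size[x] = x, 1

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path compression
            x = self.parent[x]
        return x

    def merge(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx                                 # union by size
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

cc = DisjointSets()
for v in ("s", "g", "q1"):
    cc.add(v)
cc.merge("s", "q1")                             # edge (s, q1) added to roadmap
same_component = cc.find("s") == cc.find("g")   # False: goal not yet reached
###Output
_____no_output_____
###Markdown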
Empirical performance

The performance of PRMs in practice can be quite variable from problem to problem, and even run to run (depending on the initial seed of a random number generator). In typical problems, the probability of successfully finding a path increases as $N$ increases. After a "burn in" phase with low $N$ where there is no chance of finding a solution, the probability of success rises fairly sharply before tapering off. The slope of this tapering off depends on the visibility characteristics of the problem, which we shall discuss below.

***************

| (a) PRM, w=0.01 | (b) PRM, w=0.005 | (c) PRM, w=0.0025 |
|-----------------|------------------|-------------------|
| ![fig:PRMPerformance_a](figures/planning/prm_0.01_histogram.png) | ![fig:PRMPerformance_b](figures/planning/prm_0.005_histogram.png) | ![fig:PRMPerformance_c](figures/planning/prm_0.0025_histogram.png) |

Figure 3. Histograms of PRM failure rate as more time is spent planning, taken over 100 runs on the same narrow passage problem. As the narrow passage width w shrinks, the likelihood of failure for a given time duration increases.

***************

The average time needed for incremental PRM to solve a given problem depends on a number of factors. Firstly, efficient collision queries are essential, since thousands or millions of queries will be made during a "reasonably" sized run ($N=$ 1,000 to 100,000, usually). Second, the nearness criterion (either number of neighbors $k$ or radius $R$) should be set so that a small number of edges are checked, but not too small.

The path quality of PRM solutions can be quite variable. If there exists a suboptimal wide passage in $\mathcal{F}$ while the optimal passage is narrow, incremental PRM will find the suboptimal one first (with very high likelihood). Basic PRM will find the suboptimal one when $N$ is low, but does have a chance to find the optimal one once $N$ is large enough. There is also an issue of jerkiness of the produced path, which tends to be more pronounced when incremental PRM is used, or fewer neighbors are connected. We shall see strategies to address the jerkiness of PRM paths when discussing [optimizing roadmaps](Optimizing-probabilistic-roadmaps) and [shortcutting](Shortcutting).

Analysis of visibility and performance

The running time of PRM is composed of $N$ configuration collision checks, $kn$ edge collision checks (for the $k$-nearest-neighbor variant), and $n$ nearest-neighbor queries. Let's assume the likelihood of sampling a feasible configuration is constant and nonzero, and that configuration and edge collision checks are $O(1)$. Assuming brute-force nearest-neighbor queries, the overall running time is $O(n^2)$. However, using [more sophisticated nearest-neighbor queries](Nearest-neighbor-queries), this can be reduced to $O(n \log n)$.

The basic PRM and incremental PRM algorithms have been shown to be [probabilistically complete](WhatIsMotionPlanning.ipynb#Completeness-and-optimality) under relatively mild conditions. This implies that the likelihood that the roadmap connects the start and goal (assuming they are connected in $\mathcal{F}$) approaches 1 as more milestones are added to the roadmap. But how quickly do they converge?

A key factor in the theoretical analysis (and empirical performance) of a PRM is the *visibility properties* of the free space. Using the language of unfavorable / favorable visibility properties, we can mathematically formalize the intuitive notions of a "narrow passage" and "wide passage". To do this, we will need to introduce several definitions.

First, we define a *measure* $\mu(X)$ that assigns any free space subset $X \subseteq \mathcal{F}$ a nonnegative scalar. Measures have a whole host of requirements, but intuitively, $\mu(X)$ measures the volume of $X$. (Note that if $X$ has lower dimension than $\mathcal{C}$, then $\mu(X)=0$; such sets include points, lines, etc.) Next, we define the visibility sets.

> The **visibility set** of a free configuration $q$ is the subset $\mathcal{V}(q) \subseteq \mathcal{F}$ of points that are visible from $q$. Specifically, $\mathcal{V}(q) = \{ q'\in \mathcal{F} \,|\,\text{IsVisible}(q,q')\}$.

It is typically useful to think of visibility as respecting a given connection radius, i.e., the constant $R$ in an $R$-neighborhood connection strategy. We can also similarly define the visibility set of a set of points as the union of the visibility sets of each point: $\mathcal{V}(X) = \{ q' \in \mathcal{F}\,|\,q' \in \mathcal{V}(q) \text{ for some }q\in X \}$.

***************

![fig:Visibility](figures/planning/visibility.svg)

Figure 4. Some visibility sets of various points and spaces. A PRM will require more samples to connect to points with small visibility sets. Moreover, $\epsilon$-goodness is determined by the point in the free space with the smallest visibility set.
A convex space is $(\epsilon=1)$-good, while spaces with cusps and features of lower dimension are not $\epsilon$-good for any $\epsilon > 0$.

***************

Intuitively, the milestones in a PRM $(V,E)$ with $n$ milestones are likely to connect to a new milestone if the visibility set of $V$ is large. Formally, if a new milestone $q$ is sampled uniformly at random from $\mathcal{F}$, then the probability that it can be connected to $V$ is exactly $\mu(\mathcal{V}(V))/\mu(\mathcal{F})$. Since visibility is symmetric, the probability that $q$ cannot be connected to any of the milestones in $V$ is equal to the probability that $q$ cannot be connected to any of $n$ random configurations. Since the milestones are drawn independently at random, this probability is $(1-\mu(\mathcal{V}(q))/\mu(\mathcal{F}))^{n}$. Hence, we obtain the result:

> The probability that a configuration $q$ can be connected to a PRM with $n$ milestones is $Pr(q\text{ connected}) = 1 - (1-\mu(\mathcal{V}(q))/\mu(\mathcal{F}))^{n}$, assuming that $q$ and each of the milestones is drawn at random from $\mathcal{F}$.

Note that this value rapidly approaches 1 as $n$ increases, as long as $\mu(\mathcal{V}(q))>0$. What this also shows is that if visibility properties are not uniform across the free space — that is, visibility sets are small in some areas (narrow passages) and large in others (wide passages) — PRMs will have a harder time connecting milestones in narrow passages. This is because the speed at which $Pr(q\text{ connected})$ approaches 1 is dependent on $\mu(\mathcal{V}(q))/\mu(\mathcal{F})$, with larger values converging to 1 much faster than smaller values. (On average, $\mu(\mathcal{F})/\mu(\mathcal{V}(q))$ milestones will be needed in $|V|$ before $q$ lies in the visibility set of $V$.)

We can analyze this situation further using bounds that depend on the shape of the free space. Suppose that the minimum volume of any configuration's visibility set is $\epsilon = \inf_{q\in \mathcal{F}}\mu(\mathcal{V}(q))/\mu(\mathcal{F})$. Then, for any point $q$ sampled at random, the probability that it can be connected to a given point $q'$ is at least $\epsilon$. If $\epsilon > 0$, we say that the free space is $\epsilon$**-good**. Since the visibility bound $\epsilon$ holds across the space, we can see that the probability that any $q$ is in the visibility set of $V$ is at least $1 - (1-\epsilon)^{n}$.

Keep in mind that we have not mentioned dimensionality $d$ in this discussion, only volumetric ratios, so the performance here has no direct relation to $d$. However, note that with a fixed connection radius $R$, the volume of any visibility set cannot be greater than $O(R^d)$ (the volume of an $R$-ball), and hence there is an implicit exponential dependence of performance on dimension. This also shows that to improve a PRM's visibility set in spaces of higher dimension, it is necessary to set the connection radius $R$ relatively large.

Is $\epsilon$-goodness all that we need to analyze PRM performance? No! Notice that we have only addressed the problem of whether a point can be connected to a single milestone in the PRM, but not whether it can reach all other reachable milestones with a feasible path. Specifically, we need to examine whether milestones in the same connected component of $\mathcal{F}$ are also in the same connected component of $(V,E)$. For this, we need a concept called **expansiveness**.
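Before moving on, here is a quick numeric feel for the $1 - (1-\epsilon)^n$ bound just derived; the visibility fractions and milestone counts are illustrative values I chose, and notice how a tenfold drop in $\epsilon$ demands roughly tenfold more milestones for a comparable connection probability.

###Code
connect_prob = lambda eps, n: 1 - (1 - eps)**n
table = {(eps, n): round(connect_prob(eps, n), 4)
         for eps in (0.1, 0.01)          # fraction of free space visible from q
         for n in (10, 100, 1000)}       # number of milestones in the roadmap
# e.g. (0.1, 100) -> ~1.0000, while (0.01, 100) -> ~0.634
###Output
_____no_output_____
###Markdown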
Intuitively, a space $\mathcal{F}$ has high expansiveness if for any partition of $\mathcal{F}$ into two sets $A$ and $B$, a significant portion of $A$ can see a significant portion of $B$. This means that the likelihood that one or more edges of the PRM cross any boundary in $\mathcal{F}$ increases as more milestones are added.

***************

![fig:Expansiveness](figures/planning/expansiveness.svg)

Figure 5. (a) Even if a space is $\epsilon$-good, a PRM may have a difficult time connecting two regions. (b) A $\beta$-lookout of a set $X$ is the subset of $X$ that can see a $\beta$ fraction of its complement. (c) A narrow passage causes certain $\beta$-lookouts to have small volume, reducing the expansiveness of the space. (d) A convex set is maximally expansive ($\beta=1$).

***************

More formally, we describe a simplified version of the argument in Hsu et al 97. Let us define the $\beta$*-lookout* of a subset $X\subset \mathcal{F}$ as the subset of configurations in $X$ that can see a $\beta$-fraction of the complement of $X$. Mathematically, this is defined as follows:

> The $\mathbf{\beta}$**-lookout** of $X$ is the set $\beta\text{-lookout}(X) = \{ q \in X\,|\,\mu(\mathcal{V}(q)\cap \bar{X}) \geq \beta \mu(\bar{X}) \}$, where $\bar{X} = \mathcal{F} \setminus X$ is the complement of $X$.

We define the _expansiveness_ $\beta$ of $\mathcal{F}$ as the largest value such that, for any partition $\mathcal{F} = X\cup \bar{X}$, the volume of $\beta$-lookout$(X)$ is at least $\beta \mu(X)$. If $\beta > 0$, then we say that the free space is $\beta$**-expansive**. (Note that $\beta \leq \epsilon$, since each point must see a $\beta$ fraction of its complement.)

It has been proven that for any $\beta$-expansive space, the probability that a roadmap fails to connect the start and the goal with a feasible path drops *exponentially* toward 0 (provided that they are in the same connected component of $\mathcal{F}$). Specifically, a bound can be formulated in the following form:
$$Pr(\text{failure} | n\text{ milestones}) \leq c(\beta) e^{-d(\beta) n}.$$
Moreover, the convergence constants are directly related to $\beta$, with larger values of $\beta$ leading to faster convergence (smaller $c$ and larger $d$). Exponential convergence bounds are favorable because they show that the expected running time and its variance are bounded, which is not true for all convergence rates (consider, for example, the bound $Pr(\text{failure} | n) \propto 1/n$).

Intuitively, the method of proof considers the idea of a *linking sequence* of regions connecting $s$ and $g$, such that if a milestone is sampled in each region, then $s$ and $g$ will be connected. If the space is expansive, then it can be shown that such a linking sequence exists, has finite length, and the regions have non-zero measure. The details of these proofs are out of the scope of this book.

PRM variants

Rapidly-Exploring Random Trees (RRTs)

One of the most popular PRM variants is the Rapidly-Exploring Random Tree (RRT) algorithm, which grows a tree rather than a graph. Originally developed for kinodynamic planning, it is easily adapted to kinematic planning as well. The specific variant we will discuss is called RRT-Connect, which is a *bidirectional* search.

RRT-Connect grows two trees of feasible paths, one rooted at the start and the other at the goal. At each iteration, both the start and the goal trees are *extended* toward a randomly sampled configuration. Then, if the trees are close enough, a connection will be attempted between them.
If connected, the joined trees contain a unique path from the start to the goal.

**********************

**Algorithm RRT-Connect**(s,g,N)

1. $T_s \gets \{ s \}$.
2. $T_g \gets \{ g \}$.
3. **for** $i=1,...,N$ **do**
4.    $q_{rand} \gets Sample()$
5.    $q_e \gets$ Extend-Tree$(T_s,q_{rand},\delta)$ (extend start tree at most $\delta$ distance toward $q_{rand}$)
6.    $q_e^\prime \gets$ Extend-Tree$(T_g,q_{rand},\delta)$ (extend goal tree at most $\delta$ distance toward $q_{rand}$)
7.    **if** $d(q_e,q_e^\prime) \leq \delta$ and Visible($q_e,q_e^\prime$) **then** (trees are close enough)
8.       Add edge $q_e\rightarrow q_e^\prime$ to connect $T_s$ and $T_g$
9.       **return** the path from $s$ to $g$
10. **return** "no path"

*********************

**********************

**Algorithm Extend-Tree**$(T,q_{rand},\delta)$

1. $q_{near} \gets Nearest(T,q_{rand})$
2. $q \gets q_{near} + \min\left(1,\frac{\delta}{d(q_{rand},q_{near})}\right)(q_{rand}-q_{near})$
3. **if** Visible$(q_{near},q)$ **then**
4.    Add edge $q_{near}\rightarrow q$ to $T$.
5.    **return** $q$.
6. **return** $q_{near}$.

********************

Specifically, the pseudocode is listed in [Alg. RRT-Connect](alg:RRTConnect). $T_s$ and $T_g$ denote the trees rooted at the start and goal, respectively. In Line 4, a random configuration is drawn, and in Lines 5 – 6, the trees are extended toward it along a straight line path using the Extend-Tree subroutine. RRT has a key parameter $\delta$, which is a limit on how far a tree can be extended on each step. In other words, every edge in each tree has length no more than $\delta$. Also, if the two extended milestones are within distance $\delta$, they are connected. For small values of $\delta$, it is more likely for each extension to succeed, but the tree makes slower progress in exploring the free space.

Pseudocode for Extend-Tree is given in [Alg. Extend-Tree](alg:ExtendTree). It first performs a nearest-neighbor query on the milestones in the given tree to determine a milestone $q_{near}$. It then extends a short path no more than distance $\delta$ toward the destination $q_{rand}$. If this edge is visible, then it is added to the tree.

Unlike PRMs, RRTs do not use the configurations coming from the $Sample()$ function directly, nor do they attempt more than one edge connection per iteration. Hence, they sample points in a different distribution than PRMs. But what is this distribution? We first introduce the concept of a Voronoi diagram, which is defined for some set of points $X = \{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$. The Voronoi diagram is a partition of the space $\mathcal{C}$ into Voronoi cells, one per point. The cell $C_i$ corresponding to a point $\mathbf{x}_i$ is the subset of $\mathcal{C}$ for which $\mathbf{x}_i$ is the closest point. In other words,
$$C_i \equiv \{ \mathbf{x} \in \mathcal{C}\, | \, i = \arg \min_{i=1,\ldots,n} d(\mathbf{x},\mathbf{x}_i) \}$$

RRT is said to employ a Voronoi bias strategy because each milestone in a tree is selected for expansion (i.e., be the nearest node to $q_{rand}$) with probability proportional to the volume of its Voronoi cell. This means that milestones that are closer to unexplored areas of $\mathcal{C}$ have a higher likelihood of being expanded. Moreover, the extended milestone will have a higher likelihood of extending the tree in unexplored directions (and hence the term *rapidly exploring* applies here).
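A direct transcription of Extend-Tree into Python might look as follows — a sketch reusing the earlier hypothetical `feasible`/`visible` helpers, with a brute-force nearest-neighbor search for brevity; the tree is stored as a node list plus a parent map, which is my own representation choice.

###Code
import numpy as np

def extend_tree(nodes, parents, q_rand, delta=0.1):
    """Sketch of Extend-Tree: grow the tree at most delta toward q_rand."""
    dists = [np.linalg.norm(q_rand - v) for v in nodes]
    i_near = int(np.argmin(dists))                     # Nearest(T, q_rand)
    q_near, d = nodes[i_near], dists[i_near]
    if d == 0.0:
        return q_near
    q = q_near + min(1.0, delta / d) * (q_rand - q_near)
    if visible(q_near, q):                             # check the new edge
        nodes.append(q)
        parents[len(nodes) - 1] = i_near               # record edge q_near -> q
        return q
    return q_near                                      # extension failed

nodes, parents = [np.array([0.1, 0.1])], {0: None}     # tree rooted at the start
rng = np.random.default_rng(3)
for _ in range(100):
    extend_tree(nodes, parents, rng.random(2))         # Voronoi-biased growth
###Output
_____no_output_____
###Markdown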
RRTs are appealing because tree data structures are a bit simpler to implement than graphs. Also, the RRT explores locally first, so if the start and goal are nearby, the RRT may do significantly less work than a PRM. However, RRT performance is generally more sensitive to the choice of a distance metric, and RRTs are generally better at exploring than refining.

*******************

![fig:RRTBugtrap](figures/planning/bugtrap.svg)

Figure 6. To escape the mouth of a bugtrap, an RRT needs to sample a very carefully chosen sequence of milestones within the general area that it has already explored. But due to the Voronoi bias, it frequently attempts infeasible extensions from the highlighted frontier nodes.

*******************

As an example, the "bugtrap" problem illustrated in [Figure 6](fig:RRTBugtrap) tends to pose challenges for the RRT. In this and many other problems, a planner needs to strike a balance between *exploration* toward new areas and *refinement* of the roadmap in existing areas. Let's assume RRT only grows a tree from the start; it is easy to imagine double-bugtraps that cause the same behavior for the goal. Here, the bug has a very difficult time wiggling out of the opening of the trap because it appears, purely from the Voronoi bias, that the frontier has not yet been adequately explored. However, each attempted extension ends up bumping into the walls of the trap. A *sequence* of precisely-chosen values of $q_{rand}$ is needed to escape the trap, which is highly unlikely to occur by chance.

Moreover, the theoretical analysis of RRT is more challenging because its tree expansion strategy is history-dependent. In fact, the probabilistic completeness proof contained in the original RRT paper has been shown to be flawed, and has only been corrected recently! The best exponential convergence bound found so far also shows that the expected running time is dependent on a factor of the form $c^{-d}$, where $c$ is the minimum of $\delta$ and the clearance of some feasible path connecting the start and goal, and $d$ is the dimension (Kleinbort et al, 2018). This bound is, however, extremely loose, and RRT empirical performance is not directly correlated with dimensionality; like PRM, it typically enjoys better performance in spaces with favorable visibility properties. One caveat is that the expansion radius $\delta$ must be set larger in spaces of higher dimension to avoid extremely slow convergence. In general it can be challenging to say whether an RRT or PRM will work better for a given problem without empirical testing.

Nonuniform sampling strategies

Since PRM and RRT performance depends highly on how well samples are placed in critical regions, several strategies have been developed to boost performance with nonuniform sampling. PRMs benefit from placing more samples in *low-visibility regions*, which requires identifying areas that are relatively constrained or close to obstacles. One way to do this is to record how many feasible and infeasible edges were attempted for each milestone (these are stored as counts $n_f[q]$ and $n_i[q]$, respectively, for each $q\in V$). After $N$ samples, more samples are added near the milestones with a large fraction of infeasible edges, with the hope that these milestones are located in low-visibility regions where a denser sampling is needed to make connections. Specifically, we might pick a milestone $q \in V$ with probability proportional to $n_i[q] / (n_i[q] + n_f[q])$ and then sample a new configuration from a disk centered at $q$ with radius $R$. If feasible, the new milestone is connected to the roadmap as usual.

Another method that can boost PRM performance in low-visibility spaces is the Gaussian sampling strategy. The idea is to increase the density of milestones near the boundaries of obstacles, since low-visibility regions will certainly be located near obstacles. The method actually draws two samples: one $q_1$ at random, and the second $q_2$ from a multivariate Gaussian distribution (see [Appendix A.3.](Probability.ipynb#Multivariate-Gaussians)) centered at $q_1$ and with standard deviation $\sigma$. Then, *only if exactly one of the samples is feasible*, that sample is kept. Otherwise, both are thrown out. This ensures that the segment between $q_1$ and $q_2$ straddles the boundary between the free space and forbidden region.

It might seem odd to throw away perfectly good feasible samples, since adding them to the roadmap won't hurt (and can only help) connectivity. However, every additional milestone incurs additional work to test and connect edges. In fact, edge collision checking is often the dominant cost of planning. It turns out that in the presence of narrow passages, the added cost to generate samples is worth it, and Gaussian sampling can perform quite well. However, for best performance the perturbation standard deviation $\sigma$ must be tuned to trade off these competing costs.
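A sketch of the Gaussian sampler, again using the hypothetical `feasible` from earlier: draw a uniform sample, perturb it with Gaussian noise, and keep a sample only when exactly one of the pair is feasible, so that every kept sample straddles an obstacle boundary. The default $\sigma$ here is an arbitrary illustrative value.

###Code
import numpy as np

def gaussian_sample(sigma=0.05, rng=None):
    """Rejection loop implementing Gaussian sampling near obstacle boundaries."""
    rng = rng or np.random.default_rng()
    while True:
        q1 = rng.random(2)                        # uniform sample in the box
        q2 = q1 + sigma * rng.standard_normal(2)  # Gaussian perturbation of q1
        f1, f2 = feasible(q1), feasible(q2)
        if f1 != f2:                              # exactly one is feasible:
            return q1 if f1 else q2               # keep the feasible one

q = gaussian_sample()   # a milestone lying near some obstacle boundary
###Output
_____no_output_____
###Markdown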
The idea is to increase the density of milestones near the boundaries of obstacles, since low-visibility regions will certainly be located near obstacles. The method actually draws two samples: one $q_1$ at random, and the second $q_2$ from a multivariate Gaussian distribution (see [Appendix A.3.](Probability.ipynb#Multivariate-Gaussians)) centered at $q_1$ and with standard deviation $\sigma$. Then, *only if exactly one of the samples is feasible*, that sample is kept. Otherwise, both are thrown out. This ensures that the segment between $q_1$ and $q_2$ straddles the boundary between the free space and the forbidden region.

It might seem odd to throw away perfectly good feasible samples, since adding them to the roadmap won't hurt (and can only help) connectivity. However, every additional milestone incurs additional work to test and connect edges. In fact, edge collision checking is often the dominant cost of planning. It turns out that in the presence of narrow passages, the added cost to generate samples is worth it, and Gaussian sampling can perform quite well. However, for best performance the perturbation standard deviation $\sigma$ must be tuned to trade off these competing costs.
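As a concrete illustration, here is a sketch of one round of Gaussian sampling, again assuming hypothetical `sample()` and `feasible(q)` subroutines and tuple-valued configurations:

```python
# One round of the Gaussian sampling strategy: returns a milestone near an
# obstacle boundary, or None if the sample pair is rejected.
import random

def gaussian_sample(sample, feasible, sigma):
    q1 = sample()                                    # uniform sample
    q2 = tuple(x + random.gauss(0.0, sigma) for x in q1)  # Gaussian perturbation
    f1, f2 = feasible(q1), feasible(q2)
    if f1 and not f2:
        return q1     # segment q1-q2 straddles the boundary: keep the free one
    if f2 and not f1:
        return q2
    return None       # both free or both infeasible: throw the pair out
```

A PRM would call this in a loop, discarding `None` results, until enough boundary-biased milestones have been generated.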
RRTs benefit from a slight *goal bias* which drives the tree toward the goal. In RRT-Connect, this could take the form of sampling $q_{rand}$ from $T_{g}$ some fraction of the time, which would drive extensions of $T_{s}$ toward the goal. Similarly, the reverse search could sample $q_{rand}$ from $T_s$ some fraction of the time, and drive extensions from $T_g$ toward the start. This takes the form of replacing Lines 4 – 6 in [Algorithm RRT-Connect](alg:RRTConnect) with the following code:

1. **if** $rand() \leq p_{gb}$ **then**
2. **if** $rand() \leq 0.5$ **then**
3. $q_{e}^\prime \gets RandomVertex(T_g)$
4. $q_{e} \gets$ Extend-Tree$(T_s,q_{e}^\prime,\delta)$
5. **else**
6. $q_{e} \gets RandomVertex(T_s)$
7. $q_{e}^\prime \gets$ Extend-Tree$(T_g,q_{e},\delta)$
8. **else** Perform Lines 4 – 6 as usual

Here the term $p_{gb}$ in Line 1 is the probability of using goal biasing, and Line 2 decides according to an unbiased coin flip whether to extend toward the start or toward the goal. The function $rand()$ samples from the uniform distribution on $[0,1]$.

Multi-query PRMs

Another variant of PRMs that is useful in some scenarios is the *multi-query* PRM. As presented, the PRM method finds a path for a given $(s,g)$ query and then throws out the roadmap for the next query. In some cases, we would like the robot to plan another path in the same environment. Or, a team of robots may be traversing the same environment. In this case, it makes sense to *precompute* a good PRM and then reuse it for multiple queries. This is because the primary cost of PRM planning is in the construction phase, while the graph search phase is quite fast.

PRM construction proceeds like before, but without any endpoints. Then, to query the existing roadmap for a start and goal $(s,g)$, we try connecting $s$ and $g$ to nearby milestones using visibility queries. Then, the augmented PRM is searched for a path. To keep the roadmap from growing if many queries are to be made, $s$, $g$, and all the edges connecting them are removed from the roadmap before terminating the query.

Lazy collision checking in PRMs

For complex robots and/or environments, such as those composed of CAD models, the most significant computational expense in PRMs and RRTs is checking visibility of edges (i.e., [dynamic collision checking](#Dynamic-collision-checking)), because each check may require tens, hundreds, or thousands of static collision checks. Furthermore, for complex robots, self-collision testing may need to be performed between all *pairs* of links, so even a single static collision check can take milliseconds of compute time. This can add up quickly as the roadmap begins to contain thousands of milestones.

An effective heuristic for accelerating PRM performance is to perform *lazy collision checking*, which delays collision checking for edges until a candidate path to the goal is found. The hypothesis is that if the endpoints of an edge are collision-free, then the path between them is also likely to be free. Since most edges in the PRM aren't used in the final path, it is a waste to devote effort to checking their collision status. If the path does contain a collision, the offending edge can be removed and planning can continue.

The Lazy PRM algorithm can be implemented in both basic and incremental forms. A lazy [Basic PRM](PRM) variant is as follows:

1. Create a PRM $(V,E)$, assuming IsVisible always returns true.
2. Find a path from $s$ to $g$, $v_1=s,v_2,\ldots,v_{m-1},v_m=g$, using search. If no path is found, return failure.
3. Check each edge IsVisible$(v_i,v_{i+1})$, $i=1,...,m-1$, for collision.
4. If any edge $(v_i,v_{i+1})$ is not feasible, delete it from $E$ and return to 2.
5. If all edges are feasible, return $v_1 \rightarrow v_2 \rightarrow\cdots \rightarrow v_m$ as the path.

In this design, it is helpful to cache which edges have been found to be visible to avoid re-checking edges in step 3. Another speed improvement is to use the costs of optimal paths to $g$ in the original PRM as a heuristic for A* search (used in the Batch Informed Trees* algorithm).

A lazy [incremental PRM](alg:IncrementalPRM) variant is as follows:

1. During roadmap construction, IsVisible is assumed to always return true.
2. In Line 15, once a path is found to the goal, the shortest path connecting $s$ and $g$ is checked for collision, as in steps 2 – 5 of the lazy Basic PRM variant.
3. Connected components need to be recomputed when edges are found to be infeasible.

To implement this efficiently, step 3 must be implemented so that connected components can be updated quickly when the graph changes. One way of doing this in conjunction with the path search is a *dynamic shortest paths* data structure, which stores the cost of the shortest path to every node in the graph. This data structure should be updated every time an edge is added or removed. Although in the worst case $O(n)$ costs must be updated, the vast majority of updates are typically cheap.

To adapt RRT to perform lazy collision checking, we have a problem figuring out what to do with infeasible edges. Suppose we find that an edge near the start is infeasible: discarding it would break the tree structure, or we could delete the subtree of descendants of the edge, but this would waste a significant amount of prior effort. Instead, a bidirectional tree-based lazy collision checking strategy, introduced in the SBL algorithm (Sanchez-Ante and Latombe, 2003), avoids discarding subtrees. It maintains bidirectional trees as in RRT-Connect, and checks edges for collision once a path is found from the start to the goal. If an edge is found to be in collision, then it *switches the subtree* of descendants of that edge to the other tree. This takes a degree of bookkeeping to update the tree data structures, but can be done quickly.
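To make the control flow concrete, here is a sketch of the lazy query loop of steps 2 – 5 above, written against a networkx-style graph whose candidate edges were added without collision checking; `visible(p,q)` is a user-supplied assumption.

```python
# Lazy Basic PRM query loop: search, then check only the edges on the
# candidate path, deleting infeasible edges and re-searching.
import networkx as nx

def lazy_prm_query(G, s, g, visible):
    checked = set()                        # edges already verified collision-free
    while True:
        try:
            path = nx.shortest_path(G, s, g, weight="weight")
        except nx.NetworkXNoPath:
            return None                    # all candidate paths eliminated
        ok = True
        for u, v in zip(path, path[1:]):
            e = frozenset((u, v))
            if e in checked:
                continue                   # cached: never re-check this edge
            if visible(u, v):
                checked.add(e)
            else:
                G.remove_edge(u, v)        # drop the offending edge, re-search
                ok = False
                break
        if ok:
            return path
```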
Optimizing probabilistic roadmaps

Both PRM and RRT are probabilistically complete, i.e., they are increasingly likely to find a feasible path as more samples are drawn. A natural question to ask is whether they produce paths that are close to optimal. Well, it is clear that if incremental PRM or RRT were to terminate on the first path found, these paths may be far from optimal. But if they were to *continue planning past the first path*, then perhaps better and better paths could be found. This idea forms the basis of the PRM\* and RRT\* algorithms, which have been shown to be *asymptotically optimal* (Karaman and Frazzoli, 2009).

> **Asymptotically-optimal planner**. A planner is asymptotically-optimal if the cost $c(n)$ of the path that it
> produces after $n$ iterations approaches the cost of the optimal path $c^\star$ as $n$ increases. If the
> planner is probabilistic, asymptotic optimality means that the *probability* that the cost
> $c(n)$ does not approach $c^\star$ is 0. Specifically,
> $Pr(\lim_{n\rightarrow \infty} c(n) - c^\star \neq 0)=0$.

PRM\* and RRT\*

PRM has been shown to be asymptotically-optimal using the $R$-neighborhood connection strategy, but not the $k$-nearest neighbors strategy. Using the $R$-neighborhood strategy, however, in the limit of large $n$ the planner eventually tries to connect $O(n^2)$ milestones. It has been proven that using *dynamically chosen* values of $R$ and $k$ can lead to an asymptotically optimal PRM planner, specifically the values $R^\star(n) \propto (\frac{\log n}{n})^{1/d}$ and $k^\star(n) \propto \log n$. Note that $R^\star$ shrinks toward zero and $k^\star$ grows, so that in both cases, each new milestone is expected to be connected to $O(\log n)$ milestones. Hence, the number of edges in the roadmap is expected to be $O(n \log n)$. (Note that there is a constant factor in these expressions that depends on the volume of free space and the distance measure, and it must be set sufficiently large or else asymptotic optimality no longer holds.)

*************

| (a) PRM, $k=5$ neighbors | (b) PRM, $R=0.1$ connections | (c) PRM\* | (d) RRT\* |
|---------------------------------------------------------------|--------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------|
|![fig:PRMStar_a](figures/planning/prm_knn.gif) | ![fig:PRMStar_b](figures/planning/prm_neighborhood.gif) | ![fig:PRMStar_c](figures/planning/prmstar.gif) | ![fig:PRMStar_d](figures/planning/rrtstar.gif) |

Figure 7. Convergence of various PRM and RRT variants. The fixed-$k$ strategy is not asymptotically optimal.

*************

RRT has been shown not to be asymptotically-optimal in any case, since the history-dependence of its tree growth strategy (always extending from the nearest configuration in the tree) prevents it from taking advantage of shorter paths that may arise. The RRT\* algorithm introduces a new technique called "rewiring" that, upon expanding the tree, changes the tree structure if it is possible to improve path lengths by passing through a different nearby milestone. Let us assume a unidirectional RRT. The main differences introduced to RRT are:

1. Optimal costs $c(q)$ through $T$ are stored at each node $q$, and updated during Extend-Tree and Rewire.
2. After each successful extension, points in $T$ near the new milestone $q_e$ are checked for whether better paths can be found passing through $q_e$.
3. Extend-Tree sets $c(q) = c(q_{near}) + d(q_{near},q)$, and returns $nil$ if the tree could not be successfully extended.

**********************

**Algorithm RRT\***

1. $T \gets \{ s \}$.
2. **for** $i=1,...,N$ **do**
3. $q_{rand} \gets Sample()$
4. $q_e \gets$ Extend-Tree$(T,q_{rand},\delta)$
5. **if** $q_e \neq nil$ **then** Rewire$(T,q_{e},|T|)$
6. **if** $d(q_e,g) \leq \delta$ and Visible($q_e,g$) **then**
7. Add edge $q_e\rightarrow g$ to $T$
8. $c(g) \gets$ cost of the optimal path from $s$ to $g$ if $g$ is connected, and $\infty$ otherwise
9. **return** the path from $s$ to $g$ if $c(g) < \infty$, and "no path" otherwise

*********************************************

**Algorithm Rewire**$(T,q_{new},n)$

1. Neighbors $\gets$ set of $k^\star(n)$-nearest neighbors in $T$, or points in the $R^\star(n)$-neighborhood.
2. **for** $q\in$ Neighbors sorted by increasing $c(q)$ **do**
3. **if** $c(q_{new}) + d(q_{new},q) < c(q)$ **then** (optimal path to $q$ passes through $q_{new}$)
4. $c(q) \gets c(q_{new}) + d(q_{new},q)$
5. Update costs of descendants of $q$.
6. **if** $c(q) + d(q,q_{new}) < c(q_{new})$ **then** (optimal path to $q_{new}$ passes through $q$)
7. $c(q_{new}) \gets c(q) + d(q,q_{new})$
8. Set the parent of $q_{new}$ to $q$.
9. Revise costs and parents of descendants of $q_{new}$.

***********************

Steps 5 and 9 can involve traversing large parts of the tree to update costs and parents, using a depth-first traversal of the tree. In particular, in step 9, the parent of $q_{new}$'s child $q_{c}$ should be kept as $q_{new}$ and its cost updated if $c(q_{new}) + d(q_{new},q_{c}) < c(q_{c})$. Then, the update should be called recursively on $q_{c}$. If that condition does not hold, the recursion does not continue.
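A sketch of the rewiring logic is below, storing costs and parents in dictionaries. The `near()`, `d()`, and `visible()` subroutines are assumptions for illustration; note that any edge added during rewiring must also pass a visibility check, which the pseudocode above leaves implicit. A real implementation would also store explicit child lists rather than scanning all nodes.

```python
# Sketch of RRT*'s Rewire step over an explicit tree with cost/parent dicts.
def rewire(nodes, q_new, cost, parent, d, near, visible):
    for q in sorted(near(nodes, q_new), key=lambda p: cost[p]):
        # Would routing q through q_new shorten q's path?
        if visible(q_new, q) and cost[q_new] + d(q_new, q) < cost[q]:
            parent[q] = q_new
            cost[q] = cost[q_new] + d(q_new, q)
            update_descendants(nodes, q, cost, parent, d)
        # Would routing q_new through q shorten q_new's path?
        if visible(q, q_new) and cost[q] + d(q, q_new) < cost[q_new]:
            parent[q_new] = q
            cost[q_new] = cost[q] + d(q, q_new)
            update_descendants(nodes, q_new, cost, parent, d)

def update_descendants(nodes, q, cost, parent, d):
    # Propagate an improved cost at q down its subtree (depth-first).
    for c in [p for p in nodes if parent.get(p) == q]:
        if cost[q] + d(q, c) < cost[c]:
            cost[c] = cost[q] + d(q, c)
            update_descendants(nodes, c, cost, parent, d)
```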
Convergence rate

Due to their proven asymptotic optimality and relative ease of implementation, PRM\* and RRT\* have gained wide acceptance, and have spawned many variants. But how quickly do these planners converge to optimal?

First of all, PRM\* and RRT\* run slower than their normal counterparts in finding the first feasible path, because they do more work per iteration. One way of mitigating this in RRT\* is to disable the rewiring step until a first feasible path is found, in which case the algorithm begins identically to RRT.

Secondly, PRM\* and RRT\* perform $n$ configuration feasibility checks and $O(n \log n)$ edge visibility checks. The number of configuration checks is the same as in PRM and RRT, but PRM performs $kn$ edge checks and RRT performs $n$. So we pay a logarithmic factor of computation speed to gain asymptotic optimality.

Third, the number of milestones needed to obtain a desired decrease in the suboptimality of the best path is exponential in the dimensionality. Consider two cases: either (a) the planner does not yet have a path in the homotopy class of the optimal path, and hence must explore the space further globally to make progress, or (b) the planner has a path fairly close to the optimal path and can just perform local sampling to improve the current best path. In case (a), it can be shown that for any planner that only performs binary collision queries, the expected number of samples needed to obtain a solution in the optimal homotopy class is $\Omega(\delta^{-d})$ (that is, at least a constant times $\delta^{-d}$), where $\delta$ is the clearance of a path in the optimal homotopy class. In the counterexample, the optimal path passes through a tiny block whose visibility set has volume $O(\delta^{d})$, and at least two samples need to be placed there. To address case (b), we can also show that for a sampling-based planner to locally reduce the cost of the current best path, it must place samples in a region with volume $O((c(n)-c^\star)^{d-1})$.

TODO: expand on this and show figures
Shortcutting

As noted above, PRM and RRT are only concerned with finding a feasible path, and often produce jerky, unnatural paths. Shortcutting is a very useful postprocessing heuristic, illustrated in [Figure 9](fig:Shortcutting), in which portions of a path are repeatedly replaced with shorter, feasible segments.

*************

![fig:Shortcutting](figures/planning/shortcutting.svg)

Figure 9. A shortcutting heuristic can quickly smooth out the jerkiest parts of paths that are generated by a sampling-based planner.

*************

In order to do so, two random points are sampled along the path. With the path being a curve $y(s):[0,1]\rightarrow \mathcal{C}$, we sample two parameters $u,v \in [0,1]$. If $Visible(y(u),y(v))$ is true, then we replace the portion of the path between $u$ and $v$ with the straight-line path. Otherwise, the sampling process begins again. This repeats for some number of iterations.

Shortcutting is only a local optimization technique, and not a very powerful one at that. But it is very fast, and this low overhead makes it a very practical method for getting rid of the worst parts of a jerky trajectory. In fact, we can construct an any-time planner that simply applies repeated restarts of an RRT (or PRM) followed by shortcutting. The shortest path found after shortcutting is maintained through each of these restarts. Eventually, we might get lucky and find a path close to optimal. It turns out that for many problems, this approach can outperform RRT\* (or PRM\*)!

Dynamic collision checking

So far we have assumed that edges in configuration space can be checked for collision using the $Visible$ subroutine, but checking collisions is not as straightforward as in simple geometric spaces, where we could simply check the collision status of a line segment. The simplest method for approximating the feasibility of a configuration-space line segment $\overline{ab}$ is to subdivide $\overline{ab}$ into small segments, with configurations $q_1=a,q_2,\ldots,q_{n-1},q_n=b$ uniformly spaced no more than $\epsilon$ distance apart. Then, each of $q_1,...,q_n$ is checked for collision using the $Feasible$ subroutine. The segment is considered visible if all configurations are feasible.

Note that this is only an approximate method that depends on the resolution $\epsilon$. If $\epsilon$ is too large, then collisions may be missed between checked points $q_{i}$ and $q_{i+1}$ even if both $q_{i}$ and $q_{i+1}$ are feasible. On the other hand, if $\epsilon$ is too small, then this takes a lot of time. Precisely, the number of configurations checked is $n = \lceil d(a,b) / \epsilon \rceil$.

Another issue is the order in which the configurations are checked. In the worst case, the edge is feasible, and all configurations must be checked for feasibility. However, if the edge is infeasible, then we can save time by finding an infeasible configuration quickly. Let us suppose that both $a$ and $b$ are feasible. Then, in the absence of additional information, the point that is most likely to lead to a collision is the midpoint $(a+b)/2$. This intuition gives a recursive implementation that is effective in practice:

**********************

**Algorithm Visible-Recurse**($a,b,\epsilon$)

1. If this is the first recursive call, check $a$ and $b$ for collision. Return false if either $\neg Feasible(a)$ or $\neg Feasible(b)$.
2. If $d(a,b) \leq \epsilon$ return true.
3. $m \gets (a+b)/2$
4. If $\neg Feasible(m)$, return false.
5. Return Visible-Recurse$(a,m,\epsilon)$ $\wedge$ Visible-Recurse$(m,b,\epsilon)$.

********************
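This recursion translates almost directly into Python; `feasible(q)` and the distance `d(a,b)` are user-supplied assumptions, and configurations are assumed to be tuples:

```python
# Direct Python transcription of Visible-Recurse.
def visible_recurse(a, b, eps, feasible, d, first_call=True):
    if first_call and (not feasible(a) or not feasible(b)):
        return False
    if d(a, b) <= eps:
        return True                                   # fine enough: accept
    m = tuple(0.5 * (x + y) for x, y in zip(a, b))    # midpoint
    if not feasible(m):
        return False                                  # fail as early as possible
    return (visible_recurse(a, m, eps, feasible, d, False) and
            visible_recurse(m, b, eps, feasible, d, False))
```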
This approach is illustrated in [Figure 10.a](fig:dynamic-cc).

*************

| (a) Approximate dynamic CC with recursion | (b) Exact dynamic CC with adaptive recursion |
| ------------------------------------------|----------------------------------------------|
| ![fig:dynamic-cc](figures/planning/dynamic_cc.png) | ![fig:dynamic-cc-adaptive](figures/planning/dynamic_cc_adaptive.png) |

Figure 10. Approximate and exact dynamic collision checking methods.

*************

Although this approximate technique is by far the most widely used in practice, _exact_ dynamic collision checking methods are also available. These methods are based on similar recursions, but use additional information about the clearance of a configuration. Recall that the clearance $c(q)$ of a configuration is the distance in C-space to the nearest C-obstacle. If we can show that $d(a,b) \leq c(a)+c(b)$, then we can be certain that $\overline{ab}$ is collision-free ([Figure 10.b](fig:dynamic-cc)). This is because the balls centered at $a$ and $b$ with radii $c(a)$ and $c(b)$ overlap, are in free space, and contain $\overline{ab}$ in their union. (In most cases, we do not have access to an exact clearance function, but this reasoning still works when $c(q)$ is any lower bound on clearance.) This gives the following exact algorithm:

**********************

**Algorithm Visible-Exact1**($a,b$)

1. If this is the first recursive call, check $a$ and $b$ for collision. Return false if either $\neg Feasible(a)$ or $\neg Feasible(b)$.
2. If $d(a,b) \leq c(a) + c(b)$ return true.
3. $m \gets (a+b)/2$
4. If $\neg Feasible(m)$, return false.
5. Return Visible-Exact1$(a,m)$ $\wedge$ Visible-Exact1$(m,b)$.

********************

This is an adaptive recursion method that terminates quickly when $a$ and $b$ are far from obstacles, but spends more time when the line segment passes close to obstacles.

In practice it is more likely that we have _workspace distance_ information between pairs of objects. Let $CO_1,\ldots,CO_N$ be the C-obstacles, and let $c_i(q)$ indicate the workspace clearance of the $i$'th obstacle at configuration $q$. We also need a function $\eta_i(q,q^\prime)$ that gives an upper bound on the distance that *any point on the robot moves* in workspace during the motion from $q$ to $q^\prime$.

For example, consider a 2R robot arm with link lengths $L_1$ and $L_2$, where the link geometries are simple line segments and there are two C-obstacles, one for each link. The collision constraint for link 1 only depends on $q_1$, and the points on the link are contained within a circle of radius $L_1$. Moreover, in a movement of $\theta$ radians, the tip of the link moves at most $L_1 \theta$ distance in workspace. Hence, $$\eta_1(q,q^\prime) = L_1 |q_1 - q^\prime_1 |$$ is a suitable upper bound. Through similar reasoning, we can show that $$\eta_2(q,q^\prime) = (L_1+L_2) (|q_1 - q^\prime_1 | + |q_2 - q^\prime_2 |)$$ is an upper bound on how far the tip of link 2 moves. There are general formulas like this for arbitrary articulated robots. Specifically, $$\eta_k(q,q^\prime) = (R_k + \sum_{i=1}^{k-1} L_i) \| (q_1,\ldots,q_k) - (q^\prime_1,\ldots,q_k^\prime)\|_1$$ is a suitable function for all nR robots, where $R_k$ bounds the distance from joint $k$ to the farthest point on link $k$.
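As a quick numeric check of these bounds, consider illustrative link lengths and two nearby configurations (all values below are hypothetical):

```python
# Workspace-displacement bounds for the 2R arm example.
L1, L2 = 1.0, 0.5
q  = (0.3, 1.1)   # joint angles in radians
qp = (0.5, 0.8)

eta1 = L1 * abs(q[0] - qp[0])                               # bound for link 1
eta2 = (L1 + L2) * (abs(q[0] - qp[0]) + abs(q[1] - qp[1]))  # bound for link 2
print(eta1, eta2)  # 0.2 and 0.75
```

These values would feed the test in Line 2 of the algorithm below: the motion from $q$ to $q^\prime$ is certainly safe with respect to obstacle $i$ if $\eta_i(q,q^\prime) \leq c_i(q) + c_i(q^\prime)$.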
The following algorithm uses this bound to perform exact collision detection.

**********************

**Algorithm Visible-Exact2**($a,b$)

1. If this is the first recursive call, check $a$ and $b$ for collision. Return false if either $\neg Feasible(a)$ or $\neg Feasible(b)$.
2. If $\eta_i(a,b) \leq c_i(a) + c_i(b)$ for all C-obstacles $i=1,\ldots,N$, return true.
3. $m \gets (a+b)/2$
4. If $\neg Feasible(m)$, return false.
5. Return Visible-Exact2$(a,m)$ $\wedge$ Visible-Exact2$(m,b)$.

********************

A final performance enhancement: once the condition in Line 2 is satisfied for a given C-obstacle, that obstacle can simply be ignored in all subsequent recursive calls. This focuses effort only on the constraints that need further refinement.

A non-recursive variant of this algorithm, known as conservative advancement, gives the earliest point of contact during a motion from $a$ to $b$ by walking along the line segment as far as possible while ensuring that the condition in Line 2 holds. This is useful for collision detection in simulation.

Nearest-neighbor queries

A significant computational expense in PRM, RRT, and their variants is computing nearest neighbors (and near-neighbors). There are three types of nearest-neighbor queries:

- (1-)Nearest-neighbor (NN$(q,P)$), used in RRT.
- $k$-nearest-neighbors (kNN$(q,P,k)$), used in PRM.
- $R$-neighborhood (near$(q,P,R)$), used in PRM.

These are important subroutines for a variety of application areas, including machine learning and geographic information systems, and hence the number of nearest-neighbor algorithms and software packages is quite abundant. However, in motion planning, there are two problems that oftentimes break the assumptions used in such packages:

1. The point set $P$ is dynamic, so extensive precomputation of data structures is not acceptable. Whatever data structures are used must support fast point insertion.
2. The distance metric is often non-Euclidean, and can even be non-Cartesian in the case of common geodesic spaces used in robotics, like SO(2) and SO(3).

*Brute-force* nearest neighbors simply loops through each point and returns the one with the smallest distance to the query point. This runs in $O(n)$ time, and similar algorithms can be used for kNN() and near() queries. It is also highly general, and can work with arbitrary metrics and spaces. However, using brute-force nearest neighbors causes PRM / RRT planning time to grow quadratically in the number of milestones. As a result, faster algorithms are usually needed.

k-d trees

A *$k$-d tree* data structure is a spatial hierarchy that recursively divides a Cartesian space $\mathbb{R}^k$ into regions by splitting each region with a hyperplane. *The $k$ here refers to the number of dimensions in the space, not the $k$ in the kNN query. In the following, let us revert back to our original notation where dimensionality is denoted $d$.* Each hyperplane is aligned to one of the $d$ primary axes. An illustration of a $k$-d tree is shown below.

***************

(a) k-d tree holding 14 2-D points | (b) First leaf reached in NN query | (c) Second leaf reached in NN query
-----------------------------------|------------------------------------|------------------------------------
![fig:KDTrees_a](figures/planning/kdtree.svg) | ![fig:KDTrees_b](figures/planning/kdtree_query1.svg) | ![fig:KDTrees_c](figures/planning/kdtree_query2.svg)

Figure 11. (a) $k$-d trees recursively divide a space into rectilinear regions. Each leaf of the tree contains a set of points contained in that region. (b) For a nearest-neighbor query (blue point), the leaf containing the point is reached first, and the closest point in the leaf is found (blue circle). This forms an upper bound on the nearest neighbor's distance, and any leaves further than this distance will be pruned. (c) Only one more leaf is visited before the nearest neighbor is found.

***************
More formally, the binary tree $T_{kd}$ is composed of nodes $N$. Each leaf node contains a list of contained points `pts`, and each non-leaf node contains a split dimension `dim` and split value `value`. Non-leaf nodes have exactly two children $C^-$ and $C^+$, and all points $q \in \mathbb{R}^d$ such that $q_{dim} < value$ belong to the _negative_ child $C^-$, while points such that $q_{dim} \geq value$ belong to the _positive_ child $C^+$.

We will describe how to query for the closest point (NN), with the kNN and near queries implemented similarly. We traverse the tree in a branch-and-bound manner similar to the [bounding volume hierarchy](Geometry.ipynb#Bounding-volume-hierarchies) approach. Let's assume the Euclidean distance is used. We maintain a closest point $p_{close}$, initialized by picking a point from $P$ at random. We proceed by examining nodes $N$ recursively, starting from $N=root(T_{kd})$. Pseudocode is below:

**********************

**Algorithm KDTree-NN-recurse**($q,N,p_{close}$)

1. **if** $N$ is a leaf node **then**
1. Let `pts` be the points contained in $N$.
1. Let $p$ be the closest point in `pts` to $q$.
1. **return** the closer of $p$ and $p_{close}$.
1. **else** (non-leaf node)
1. Let its splitting plane be on axis `dim` with value `value`. Let its children be $C^-$ and $C^+$.
1. **if** $q_{dim} < value$ **then** (negative side first)
1. $C_1 \gets C^-$, $C_2 \gets C^+$.
1. **else** (positive side first)
1. $C_1 \gets C^+$, $C_2 \gets C^-$.
1. $p_{close} \gets$ KDTree-NN-recurse($q,C_1,p_{close}$)
1. **if** $|q_{dim} - value| \leq d(q,p_{close})$ **then** (prune opposite side if too far)
1. $p_{close} \gets$ KDTree-NN-recurse($q,C_2,p_{close}$)
1. **return** $p_{close}$

********************

If $N$ is a leaf node, we check all its points in `pts` in brute-force manner (Lines 1 – 4). If $N$ is a non-leaf node, containing split values `dim` and `value`, we first examine whether $q_{dim} < value$ or $q_{dim} \geq value$, and recurse on the corresponding child (Lines 7 – 11). This recursive call may update $p_{close}$. Then, Lines 12 – 13 consider whether to check the opposite child. If $|q_{dim} - value| > d(q,p_{close})$, the distance $|q_{dim} - value|$ to the splitting hyperplane is sufficiently large that there is no chance that the closest point lies within the region defined by the opposite child. Hence, recursion on the opposite child can be skipped. Regardless of the outcome, we return $p_{close}$.

**Insertion.** To insert points into a $k$-d tree, we can simply locate the leaf node in which the point is located, and add it to the `pts` structure. If the number of points in `pts` exceeds some threshold, defined by a given parameter, then the node is converted into a non-leaf node via splitting. Letting `dim` be the axis of the parent, the chosen axis can either be `(dim+1) mod d` (round-robin splitting) or the dimension with the largest variation in `pts`. In either case, `value` is set to the median value of `pts` in that dimension. A potential problem with incremental insertion is that unless the points are distributed identically at random, the split value of a leaf may not bisect the distribution of future points, leading to an imbalanced tree. More advanced algorithms may detect imbalanced trees during construction and rebalance them.
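The query pseudocode maps closely onto Python; the node layout below (leaves store `pts`, internal nodes store `dim`, `value`, and two children) is an illustrative choice, and Euclidean distance is assumed:

```python
# Python transcription of KDTree-NN-recurse for Euclidean distance.
import math

class Node:
    def __init__(self, pts=None, dim=None, value=None, neg=None, pos=None):
        self.pts, self.dim, self.value = pts, dim, value
        self.neg, self.pos = neg, pos     # children on each side of the split

def nn_recurse(q, node, p_close):
    if node.pts is not None:              # leaf: brute-force over its points
        return min(node.pts + [p_close], key=lambda p: math.dist(p, q))
    # Visit the side containing q first, then the opposite side only if the
    # splitting hyperplane is closer than the best point found so far.
    if q[node.dim] < node.value:
        first, second = node.neg, node.pos
    else:
        first, second = node.pos, node.neg
    p_close = nn_recurse(q, first, p_close)
    if abs(q[node.dim] - node.value) <= math.dist(p_close, q):
        p_close = nn_recurse(q, second, p_close)
    return p_close
```

A query would be started as `nn_recurse(q, root, p0)` with `p0` an arbitrary point of $P$.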
**Extensions.** $k$-d trees can be extended easily to weighted Euclidean distances, since a weighted Euclidean distance is equivalent to a rescaled version of Euclidean space. They can be extended to handle other distance metrics or non-Cartesian spaces with a bit more effort. For other distance metrics, the main challenge is determining the point-hyperplane distance $\min_{p | p[dim]=value} d(p,q)$ rather than the straightforward calculation in Line 12. Non-Cartesian spaces require an alternative definition of the splitting plane, the sidedness determination (Line 7), and the point-splitting-plane distance (Line 12). Insertion also needs to be modified to determine a reasonable splitting plane.

**Performance.** Notice that in the NN query, recursion proceeds in depth-first fashion, and the first leaf node found is associated with the region containing $q$. Ideally, the closest point in this region will be very close to $q$, and hence most of the opposite sides will be pruned. In the best case, all of the opposite sides will be pruned and the query runs in $O(\log |P|)$ time. In the worst case, all $|P|$ points must be checked, in addition to the overhead of traversing the tree, making this no better than brute-force search. It can be seen that performance degrades if the points are nonuniformly distributed or the tree is imbalanced, that is, the number of points on either side of a split differs significantly. Performance also degrades in spaces of higher dimension, because point-point distances tend to be much larger than point-hyperplane distances.

Approximate methods

Due to the degradation in performance of $k$-d trees in spaces of higher dimension, it is common to apply approximate nearest-neighbor techniques. These sacrifice exactness of the output for speed improvements. The sacrifice of exactness is usually worth it in sampling-based planning because for most algorithms *there is no inherent reason to use the exact nearest neighbor(s) for connection* except that a closer milestone is slightly more likely to yield a feasible edge than a farther one.

One straightforward approximate method that uses $k$-d trees is to modify the pruning condition $|q_{dim} - value| \leq d(q,p_{close})$ in Line 12 so that more nodes are pruned. A typical approach is to inflate the point-hyperplane distance by a relative coefficient $\epsilon_{rel}\geq 0$ and an absolute coefficient $\epsilon_{abs}\geq 0$ so that the condition becomes $(1+\epsilon_{rel})\cdot|q_{dim} - value| + \epsilon_{abs} \leq d(q,p_{close})$. With such an approach, it is easy to show that the distance of the resulting point $p_{close}$ to $q$ is no more than $(1+\epsilon_{rel})d^\star + \epsilon_{abs}$, where $d^\star$ is the true nearest-neighbor distance. With larger values of $\epsilon_{rel}$ and $\epsilon_{abs}$, more branches are pruned at a sacrifice of optimality.

Another approximation technique is Locality Sensitive Hashing (LSH), which is based on the idea that if two points are close, then random projections of the points onto a lower-dimensional subspace are also likely to be close. The details of LSH are beyond the scope of this book, but [many references are available](https://en.wikipedia.org/wiki/Locality-sensitive_hashing).

Several software packages are available for exact and approximate nearest-neighbor queries. In Python, [scipy contains an implementation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query.html) of $k$-d trees, and [scikit-learn](https://scikit-learn.org/stable/modules/neighbors.html) implements $k$-d trees and ball trees. Both libraries accept an approximation factor. For approximate nearest neighbors, there are many packages named ANN, and the [FLANN](https://www.cs.ubc.ca/research/flann/) library is a popular choice, used in the [Open Motion Planning Library](https://ompl.kavrakilab.org/).
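For example, exact, k-nearest, radius, and approximate queries with scipy's implementation look as follows (the array sizes are illustrative):

```python
# Nearest-neighbor queries with scipy's k-d tree.
import numpy as np
from scipy.spatial import KDTree

P = np.random.rand(1000, 6)                      # 1000 milestones in a 6-D C-space
tree = KDTree(P)
q = np.random.rand(6)

dist, idx = tree.query(q)                        # exact 1-nearest neighbor
dists, idxs = tree.query(q, k=5)                 # k-nearest neighbors
neighborhood = tree.query_ball_point(q, r=0.3)   # indices within radius R
dist_a, idx_a = tree.query(q, eps=0.1)           # approximate: inflated pruning
```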
Common pitfalls in employing PRMs

Sampling-based motion planning is appealing since it can be implemented for a wide variety of problems by non-planning experts. However, there are several issues that can cause PRMs to fail. What is often frustrating is that the PRM will not provide a *rationale* for failure! It just appears that it "doesn't work". Some of the most common pitfalls encountered when implementing PRMs and their variants are:

- Improper handling of the non-Euclidean topology of $\mathcal{C}$ in the distance metric $d(p,q)$ and the dynamic collision checking function.
- Improper scaling of the C-space / badly scaled distance thresholds $R$ or $\delta$.
- Providing infeasible start and goal configurations.
- Providing start and goal configurations in "deep pockets": passages narrowing down as the endpoint is approached.
- Incorrect feasibility tests.
- Applying a planner when the free space volume is negligible, or narrow passages are infinitely thin.

When debugging, it is often extremely useful to extract and visually debug the roadmap produced by a planner. This helps diagnose problems like the planner taking tiny steps, not expanding the roadmap at all, or detecting phantom obstacles. This can be tricky in high-dimensional spaces, since visualization must ultimately take place on a 2D display, and a roadmap may contain thousands of configurations and edges.

To handle topology, it is extremely important to ensure that the notion of a "straight line path" in dynamic collision checking interpolates nearly along a geodesic, and that the distance metric is relatively close to a geodesic distance. When orientations are present, if this issue were neglected and the C-space were treated as Euclidean, then small positive rotations would *never* be connected to small negative rotations. This will manifest itself as artifacts in which the robot will either fail to find a path, or will rotate in an unnecessarily long fashion.

For choosing thresholds, a rule of thumb is to start by setting $R$ and $\delta$ to be approximately 10% of the diameter of the C-space. The values can then be fine-tuned to achieve better performance on a given problem. A good rule of thumb is to aim for approximately 5 – 15 connections per milestone; this tends to work well for setting the value of $k$ when $k$-nearest neighbors is used as the nearness criterion in PRM.

The infeasible endpoint problem is often encountered when there is a bit of error in the world model or the robot's sensing of its configuration, and the robot starts or ends at a configuration that is in contact (or close to it). There are two approaches to handling this: before planning, adjust the world model so that the robot is collision-free (which can be hard), or slightly perturb $s$ and $g$ to new configurations $s^\prime$ and $g^\prime$ that are collision-free with respect to the robot's current knowledge. Then, the path is planned between $s^\prime$ and $g^\prime$, and the robot executes the path $s\rightarrow s^\prime \rightsquigarrow g^\prime \rightarrow g$. This, however, assumes that the paths to the perturbed configurations are actually feasible in the real world, which requires a bit of care.
The deep pocket problem is faced particularly often in manipulation or docking, in which the start or goal has the robot touching the obstacle, and the robot must make a careful, coordinated maneuver to leave it. For example, when the robot is grasping an object, its fingers are touching both sides of the object, and the hand must slide carefully in or out of position without moving laterally or rotating about certain axes. Hence, the passage is quite narrow in at least 2 or 3 dimensions! In these pockets of free space, the robot must take shorter steps, and most directions of travel lead to collision. However, once the pocket is escaped (like when the hand is away from a grasped object), then large steps can again be taken. In other words, visibility is nonuniform across $\mathcal{F}$. There are three general ways of handling this issue, all of which require studying the manipulation problem more carefully:

1. Manually define a short docking/undocking maneuver that inserts into / retracts from the pocket. This could be, for example in manipulation, a Cartesian move that places the gripper in front of the object with fingers wide open. The inverse of this maneuver is used to determine the start and goal points for the planner.
2. Start a tree-growing planner like RRT from the constrained endpoint with a small step size. After some time, the farthest node from the start is assumed to have wiggled out of the pocket, and point-to-point planning can begin from that new endpoint.
3. Develop an obstacle-sliding local planner or extension method that allows the planner to generate motions that slide against obstacles.

It is easy to make bugs when defining feasibility tests, particularly in more complex problems where feasibility requires passing many conditions. This is problematic because the subroutine is the *only* representation the planner has of the free space, so it needs to accurately reproduce the C-obstacles of the problem; otherwise the planner will produce paths that collide, or fail to find a solution where one obviously exists. There are some newer techniques that search for a small set of problematic C-obstacles blocking the way, which can help debug incorrect settings (Hauser 2012). But perhaps the first approach to try is to capture statistics during planning to detect the frequency at which each condition passes and fails inside the test. Some motion planning libraries will do this automatically and ask the user to define individual conditions, but in others this is up to the user. If a test never fails (or always passes), this suggests an obvious implementation bug.

Finally, the free space must contain a non-negligible volume of space (that is, $\mu(\mathcal{F}) / \mu(\mathcal{C})> 0$). This condition is violated when a constraint is introduced (like an IK constraint, a constraint that two objects must touch, or a constraint that a joint must take on a particular value) that leaves all feasible configurations on a manifold of lower dimensionality than the space. In these cases, the PRM will not be able to generate samples with non-negligible probability. One approach to handle this problem is to parameterize the solution manifold explicitly.
Extensions of PRMs are also available to properly handle manifold constraints without a need for parameterization; these techniques generate samples by projecting them onto the feasible manifold, and also construct paths that move along the manifold. These will be discussed later... (TODO: where will manipulation planning be added?)

Incomplete Methods
------------------

In addition to the above methods that satisfy some notion of completeness, there are additional methods based on optimization techniques that are incomplete: they have no guarantee of finding a feasible path when one exists. They can, however, generally produce paths quickly when they do work.

Potential fields

Potential fields are a well-studied technique that works using only local information to guide the robot's movement, and is therefore quite fast, making it appropriate for real-time obstacle avoidance as well as path planning in relatively simple spaces.

The general idea is to consider the robot's configuration as being a particle in some energy potential landscape. Due to "gravity" the particle will feel virtual forces equal to the negative of the gradient of this landscape. If the landscape is constructed to have a global minimum at the goal, then by following the gradient the particle will, hopefully, arrive at the goal.

To construct such a landscape, the usual method is to combine an *attractive potential* field, in which the force is some gain constant $k_{att}$ times the vector pointing toward the goal: $$P_{att}(q) = \frac{1}{2}k_{att} \| q - q_g \|^2$$ along with a *repulsive potential* generating a repulsive force for each obstacle. The repulsive force is chosen to grow larger (typically toward infinity) as the robot gets closer to the obstacle. Some limiting distance $\rho_0$ is typically chosen where the effect of an obstacle drops off to 0. One such function is the following: $$P_{rep}(q) = \left\lbrace \begin{array}{ll} \frac{1}{2}k_{rep}(1/\rho(q) - 1/\rho_0)^2 & \text{if }\rho(q) \leq \rho_0\\0 & \text{if }\rho(q) > \rho_0\end{array}\right.$$ Here $\rho(q)$ is a function that measures the workspace distance between the robot and the obstacle, and $k_{rep}$ modulates the strength of the force. The potential is infinite at $\rho(q)=0$ and drops down to 0 at $\rho(q) = \rho_0$. (Note that here we must be able to calculate distance rather than just Boolean collision detection.)

The force acting on the particle is the negated gradient of each potential: $$f_{att}(q) = -k_{att} (q - q_g)$$ and $$f_{rep}(q) = \left\lbrace \begin{array}{ll} k_{rep} (\frac{1}{\rho(q)} - \frac{1}{\rho_0})\frac{1}{\rho(q)^2} \frac{\partial \rho(q)}{\partial q} & \text{if }\rho(q) \leq \rho_0\\0 & \text{if }\rho(q) > \rho_0\end{array}\right.$$

Then, to evolve the configuration over time as a particle in this potential field, we use an iterative approach. At the current time step, the robot is at position $q_t$. The next point along the path is given by: $$q_{t+1} = q_t + \frac{\Delta t}{m}(f_{att}(q_t) + f_{rep}(q_t))$$ where $m$ is a virtual "mass" of the robot and $\Delta t$ is the time step. One potential issue with this method is that the magnitude of the force vector can be highly varying, from 0 at the goal to infinity at the boundary of an obstacle. To avoid huge jumps (or barely any movement at all) in the path, it makes sense to dynamically set the mass proportional to the magnitude of the force. In this way, a consistent rate of progress is ensured as the path evolves.

***************

![fig:potential](figures/planning/Yixuan_chap10_fig13_2.png)

Figure 13. An example of a potential field. The yellow circle is the obstacle. The contours show curves of equal potential value.

***************
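A single step of this scheme can be sketched as follows, assuming one obstacle with a user-supplied workspace distance `rho(q)` and its gradient `grad_rho(q)`; the gain values are illustrative:

```python
# One step of potential-field path evolution with dynamically scaled mass.
import numpy as np

def potential_step(q, q_g, rho, grad_rho, k_att=1.0, k_rep=1.0, rho0=1.0, dt=0.01):
    f = -k_att * (q - q_g)                  # attractive force toward the goal
    r = rho(q)
    if r <= rho0:                           # repulsive force inside influence range
        f = f + k_rep * (1.0/r - 1.0/rho0) * (1.0/r**2) * grad_rho(q)
    m = np.linalg.norm(f)                   # mass proportional to force magnitude,
    return q + (dt / m) * f if m > 0 else q # so every step has a consistent size
```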
This method works well when the robot simply needs to move slightly away from a straight-line path to avoid obstacles, provided that obstacles are relatively convex and spatially distant. Its main advantages are 1) speed of computation, and 2) only local information about obstacles is needed. However, like other local methods it is prone to local minima, caused either by concave obstacles or by narrow passages where the repulsive forces from either side of the passage cancel out the attractive force.

Trajectory optimization

Trajectory optimization is another potential-based method that optimizes the overall shape of the robot's path to minimize cost. Unlike potential fields, for which the optimization variable is the configuration of a single point in time, trajectory optimization uses some parameterization of the *entire* path as the optimization variable. This helps it avoid future obstacles and, in some cases, avoid local minima that potential fields would fall prey to.

Such methods begin with the definition of some fixed number of *path parameters* $\theta \in \mathbb{R}^N$ which dictate the shape of a candidate path. One example, for piecewise linear paths passing between the start and the goal configuration, is simply the set of intermediate milestones: $$\theta = (q_1,\ldots,q_{k-1})$$ In this case, the path $y(s)$ consists of $k$ straight-line segments, interpolating between milestones $q_0=q_s, q_1, \ldots, q_{k-1}, q_k=q_g$. Any value of $\theta$ dictates the shape of some path, and any piecewise linear path with $k$ segments corresponds to some value of $\theta$. To make this dependence clear, we shall refer to the path defined by some value $\theta$ as $y_\theta$.

TODO: Figure 14

If the dimension of the C-space is $d$, then $N = d(k-1)$. Hence the trajectory optimization problem can be quite high-dimensional (hundreds or thousands of dimensions) even for C-spaces with a moderate number of dimensions.

Next, we must encode the objective function and constraints. For minimizing path length, it may be tempting to define the following cost function: $$f(\theta) = \sum_{i=1}^k \| q_i - q_{i-1} \|.$$ However, this formulation has the drawback that it is not differentiable when two milestones are equal, and it also has a null direction when three milestones lie on a straight line. It is more numerically convenient to minimize the sum of squared distances $$f(\theta) = \sum_{i=1}^k \| q_i - q_{i-1} \|^2,$$ which, if a $k$-segment piecewise linear path is indeed optimal, is minimized when the path (nearly) has minimum length and the milestones are evenly spaced.

Now let us proceed to defining constraints, which we assume are in the form $g(q) \leq 0$. At first glance, one might choose to simply enforce constraints on the milestones: $$h(\theta) = \begin{bmatrix}{g(q_1)} \\ {\vdots} \\ {g(q_{k-1})} \end{bmatrix} \leq 0.$$ However, this runs the risk of having two milestones on either side of an obstacle, with the intermediate segment crossing the obstacle. Instead, we must consider the possibility of constraint violations between milestones.
A straightforward way to do so is to use *collocation points*, which are points along the path at which constraints will be enforced. Specifically, we can define some number $M$ of collocation points at parameters $s_1,\ldots,s_M \in [0,1]$, usually evenly distributed along the parameter space $[0,1]$. The $j$'th collocation point lies on a segment indexed by $i_j \in \{1,\ldots,k\}$ and lies a fraction $u_j \in [0,1]$ along the straight-line segment, where these are determined so that the configuration at the collocation point is: $$y_\theta(s_j) = q_{i_j-1} + u_j (q_{i_j} - q_{i_j-1}).$$ We then define many inequality constraints on $\theta$ so that the constraints at each collocation point are enforced: $$h(\theta) = \begin{bmatrix}{g(y_\theta(s_1))} \\ {\vdots} \\ {g(y_\theta(s_M))} \end{bmatrix} \leq 0.$$

The resulting problem is a constrained optimization problem ([Appendix B.3.](Optimization.ipynb#Constrained-Optimization)), which can be solved using a nonlinear program solver, like Sequential Quadratic Programming (SQP). Efficient implementations will take advantage of sparseness in the constraint Jacobian.

Another alternative lets us use unconstrained optimization ([Appendix B.3.](Optimization.ipynb#Unconstrained-Optimization)) by converting hard constraints to penalties in the objective function. In this approach we define a penalty function for violating constraints: $$f_{pen}(\theta) = \sum_{j=1}^M \max(g(y_\theta(s_j)), 0).$$ Then, by minimizing a weighted objective function $$f(\theta) + w f_{pen}(\theta)$$ using standard nonlinear optimization techniques (e.g., Quasi-Newton methods), portions of the path for which constraints are violated will be pushed out of the C-obstacle. However, if $w$ is not set sufficiently high, then the optimum of the weighted objective function will still slightly overlap with the obstacle. To address this, we can progressively increase $w$ to reduce the amount of overlap. To prevent overlap altogether, we can also allow the constraint violation penalty to extend a distance $\gamma > 0$ outside the region where the constraint is violated: $$f_{pen}(\theta; \gamma) = \sum_{j=1}^M \max(g(y_\theta(s_j)), -\gamma) + \gamma.$$
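A minimal sketch of the penalty formulation, using scipy's general-purpose minimizer over the stacked milestones, is below. The scalar constraint `g(q) <= 0`, the dimensions, the collocation count, and the optimizer choice are illustrative assumptions; a serious implementation would supply gradients, exploit sparsity, and increase `w` progressively as described above.

```python
# Penalty-based trajectory optimization over intermediate milestones.
import numpy as np
from scipy.optimize import minimize

def optimize_path(qs, qg, g, k=10, M=100, w=100.0, d=2):
    def milestones(theta):
        return np.vstack([qs, theta.reshape(k - 1, d), qg])
    def path_config(Q, s):                  # piecewise-linear y_theta(s)
        t = s * k
        i = min(int(t), k - 1)
        return Q[i] + (t - i) * (Q[i + 1] - Q[i])
    def cost(theta):
        Q = milestones(theta)
        f = np.sum(np.diff(Q, axis=0)**2)   # sum of squared segment lengths
        pen = sum(max(g(path_config(Q, s)), 0.0)   # collocation-point penalties
                  for s in np.linspace(0.0, 1.0, M))
        return f + w * pen
    theta0 = np.linspace(qs, qg, k + 1)[1:-1].ravel()  # straight-line seed
    res = minimize(cost, theta0, method="BFGS")
    return milestones(res.x)
```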
Regardless of whether a constrained or unconstrained approach is taken, there are two major issues with trajectory optimization:

- The computational cost of optimization depends strongly on the number of path parameters and collocation points. If too few path parameters are chosen, then a feasible path may not be found; if too few collocation points are chosen, then the path may violate constraints.
- For complex environments, the potential landscape in $\theta$ space is littered with local minima (and typically, more minima appear as the granularity $k$ of the path grows).

The problem of choosing collocation points can be addressed by adaptively identifying the point along the path with maximum constraint violation, using advanced optimization techniques known as *constraint generation* or *semi-infinite programming*.

The local minimum problem can be partially addressed either by initializing the optimizer with a path from some other motion planning method, like a sampling-based planner, or by using global optimization techniques. The approach of seeding an optimizer with a sampling-based planner is fast and often works well. However, it does not guarantee a globally optimal path, because the planner may have produced a seed path in a suboptimal homotopy class or basin of attraction. Global optimization may result in better paths, but can be extraordinarily slow, particularly in high-dimensional spaces.

Summary
-------

Key takeaways:

- Sampling-based motion planners can overcome some limitations of the curse of dimensionality. However, they pay a cost in the variance of solution quality and running time.
- The running time of such planners depends on the visibility characteristics of the free space, which do not directly relate to dimensionality. Running times will be fast in spaces with good visibility.
- Probabilistic roadmaps (PRMs) and Rapidly-exploring Random Trees (RRTs) are the most widely used classes of such planners. There are many variations on the basic structure.
- Shortcutting can be employed in postprocessing to achieve fast (but local) improvements in path quality. To achieve global improvements, optimizing variants of PRMs and RRTs are available.
- Potential field methods use only local information to determine a direction of movement and are extremely fast. They can work well for real-time obstacle avoidance, but are prone to local minima.
- Trajectory optimization methods simultaneously optimize milestones along an entire trajectory. However, they require a choice of the number of milestones used to represent a path, and are also prone to local minima.

Exercises
---------

1. Let $n$ be the number of (feasible) milestones in a probabilistic roadmap, and $N$ be the number of configurations sampled. Prove that if a PRM algorithm is probabilistically complete as $n$ increases, then it is also probabilistically complete as $N$ increases, as long as the chance of drawing a feasible sample is nonzero.
1. A PRM with a fixed connection radius $R$ can be thought of as restricting the visibility set of a point to be intersected with a neighborhood of radius $R$. With this interpretation, are the visibility properties of a space ($\epsilon$-goodness and $\beta$-expansiveness) dependent on $R$? Explain why or why not. How would the visibility properties vary depending on whether the distance function was chosen to use an $L_1$, $L_2$, or $L_\infty$ metric?
1. Suppose the free space is described by a set of $m$ C-obstacles $C_1,...,C_m$. Let $\mathcal{C}$ be the space in which configurations are sampled, and let $\mu$ be the volume measure. For a sampled configuration $q$, what is the probability that $q$ lies within each C-obstacle? If testing each obstacle has the same computational cost, what is the fastest order in which the C-obstacles should be tested?
1. Illustrate a free space in which Lazy PRM is expected to check a large fraction of edges for visibility before finding a solution. Lazy PRM may take more time than a standard PRM in this case. What component of Lazy PRM would be the cause of this computational overhead?
1. Does it make sense to build a lazy PRM in precomputation for multi-query path planning? If so, give some examples of situations in which this approach would be useful. If not, explain why not.
1. In our discussion of shortcutting, path length was used as the objective function for optimization. Give an example of an objective function for which shortcutting does not improve the path cost. Then, describe a modification to the shortcutting algorithm so that the objective function does not increase.
1. What is the maximum number of static collision checks needed for a PRM to check a path between milestones $v_1,...,v_m$, given a fixed resolution of $\epsilon$ for dynamic collision checking?
How many static collision checks and distance queries are needed for a PRM to solve a problem, using Visible-Exact1 for dynamic collision checking, where the clearance of the path $y = v_1 \rightarrow \cdots \rightarrow v_m$ is $\delta$?
1. Implement a brute-force $k$-nearest-neighbors algorithm that runs in $O(n k)$ time. Hint: store the $k$ nearest neighbors in an array, and maintain the index of the neighbor with maximum distance. Can you improve this to $O(n \log k)$ time?
1. Write pseudocode for an $R$-neighborhood query for a $k$-d tree. Implement it, double-checking that it works properly compared to a brute-force approach on random datasets.

Interactive quiz
###Code
#This code must be run from the RoboticSystemsBook folder
# If you are running on Google Colab, uncomment the following code:
#
# %cd ~
# !git clone --depth 1 https://github.com/krishauser/RoboticSystemsBook
# %cd RoboticSystemsBook
from rsbook_code.assessment import quiz
quiz.show("motion_planning_higher_dimensions")
###Output
_____no_output_____
###Markdown
Section III. MOTION PLANNING

Chapter 10. Motion Planning in Higher Dimensions

The algorithms we have studied for the 2D case do not apply to many other systems of interest: quadcopters that avoid obstacles in 3D, articulated robot arms, multi-robot teams, and robots that manipulate objects. The main issue that we are faced with is that previous geometric planning methods require the ability to explicitly perform computations about the shape of obstacles. In higher-dimensional configuration spaces, this is rather challenging. This chapter will consider how to address problems with configuration spaces of arbitrary dimension. (In fact, these algorithms can also be applied to problems of lower dimension too; in low-dimensional spaces, planning is fairly easy!)

Geometric planning methods like visibility graphs and cell decomposition can often be extended to 3D environments with polyhedral obstacles with some work. There are also algorithms for path planning with constraints expressed as semi-algebraic sets that work in spaces of arbitrarily high dimension, but their running time is exponential in dimensionality; they are more of an intellectual curiosity, since they have never been practically implemented. Grid search planners can also be extended to higher-dimensional grids, but they must in the worst case explore an exponential number of grid vertices.

In fact, it has been proven that even feasible path planning is NP-hard in the case of articulated $n$-joint robots. Surprisingly, optimal path planning in the presence of 3D polygonal obstacles is also NP-hard in the number of obstacle vertices! This dramatic increase in the complexity of exact algorithms in dimensions 3 and higher has led to the development of several approximate methods, which are discussed in detail in this chapter.

Implicit C-obstacle representation
----------------------------------

C-obstacles for general articulated robots are, in general, even more complex than they were in 2D and 3D. As a result, most motion planning algorithms outside of planar environments do not attempt to build an explicit representation of C-obstacles, but instead opt to perform Boolean *feasibility queries* in which the collision status of a robot is queried for a given configuration: in other words, we can test whether $q\in \mathcal{C}O$ for a specific configuration $q$, but we do not have a representation of any points on $\partial \mathcal{C}O$. Specifically, the user of a planner defines a subroutine as follows: $$Feasible(q) = \left\{ \begin{array}{ll}T & \text{ if $q$ is in the free space} \\F & \text{ if $q$ is in the forbidden region}\end{array} \right.$$ A planner can then call this subroutine to probe whether a configuration is feasible. Since this will be called thousands or millions of times, fast planning in high-dimensional spaces requires efficient collision tests as described in [Chapter 7](Geometry.ipynb#Collision-Queries).

Often, we will also need to check whether a *motion* is feasible as well, usually a short segment of a path $\overline{pq}$ between configurations $p,q \in \mathcal{C}$. This process is called a *visibility query* in the case of a straight-line path, and can be a user-defined subroutine or performed by the planner.
The query is specified as follows: $$Visible(p,q) = \left\{\begin{array}{ll}T & \text{ if $\overline{pq}$ is completely inside the free space} \\F & \text{ if $\overline{pq}$ intersects the forbidden region}\end{array} \right.$$

In general, the process of checking motions for collision is known as *dynamic collision checking*. The simplest method for doing so is simply to take small steps along the path and perform feasibility queries at each configuration. More details about this and other techniques are described in the [section below](#Dynamic-collision-checking).

In addition to the Boolean feasibility query computational model, we also consider some planners that exploit knowledge encoded in an implicit function model $\mathcal{C}O = \{ q \quad |\quad f(q) \leq 0 \}$. For example, one such implicit function $f$ may be the signed distance in workspace between the robot and obstacles. (Specifically, this would return the distance when there is no collision, and the negative of the penetration depth when collision exists.) For most complex geometric models it is far more computationally expensive to perform distance and penetration depth computations than collision queries. As a result there is a trade-off between using a computational query that provides richer information vs. the added cost of invoking the query.

Grid Search and the Curse of Dimensionality
-------------------------------------------

Let us begin by considering the case of extending grid search to $n$-D space. It is fairly straightforward to build such a grid, and collision checking for arbitrary $n$-D robots at configurations or paths can be performed relatively quickly (we shall describe methods for doing so below). However, the number of vertices that may need to be explored grows exponentially with the dimension of the space. This growth rapidly overwhelms the available computational resources, both in time and memory.

It is helpful to get a sense of the absolute scale of exponential increase to appreciate how difficult this makes the problem. Consider creating a grid in a 6-D unit hypercube $[0,1]^6$ with resolution $h$ on each axis. The number of vertices in the grid is listed in the table below. Clearly, at high resolutions it would be impractical to search the entire grid.

| **Resolution $h$** | **# vertices** |
| -------------------- |------------------- |
| 0.5 | 64 |
| 0.25 | 46,656 |
| 0.1 | 1,000,000 |
| 0.05 | 64,000,000 |
| 0.025 | 46,656,000,000 |
| 0.01 | 1,000,000,000,000 |

Let us also fix a relatively manageable resolution, say 0.1, and observe what happens as the dimension varies. The following table shows how many vertices are in a grid of variable dimension $[0,1]^d$.

| **Dimension $d$** | **# vertices** |
| ------------------- |-----------------------------|
| 2 | 100 |
| 3 | 1,000 |
| 6 | 1,000,000 |
| 8 | 100,000,000 |
| 10 | 10,000,000,000 |
| 15 | 1,000,000,000,000,000 |
| 20 | 100,000,000,000,000,000,000 |

Yikes! Even if feasibility checking and visibility checking were super-fast, this becomes impractical for use in dimensions of around 8. This problem is generally known as the *curse of dimensionality*.

Besides the combinatorial explosion in the number of grid cells needed to span a space, there are several other odd effects in high-dimensional spaces that are counterintuitive to our experience in 2D and 3D spaces. Examples include the fact that the volume of a hypersphere drops dramatically as dimension increases. In fact, the volume of a unit hypersphere approaches 0 as $d\rightarrow \infty$!
This implies that *almost all points are far* in a high-dimensional space, for most reasonable definitions of "far". Another effect is that the complexity of a polytope grows dramatically. Consider a polytope in $d$-D space with $n$ faces, such as that defined by a linear inequality $A x \leq b$, with $A$ an $n \times d$ matrix and $b$ an $n$-vector. The number of vertices of the polytope is $O( {n \choose d} )$, which grows exponentially in $n$ and $d$. As a result, the complexity of the free space can be exponential even for simple constraint representations.

Since this "curse" appears so often in computational problems, it is of great interest (and often surprising) to find algorithms that circumvent these limitations. However, they tend to trade off computational complexity for other limitations. One example is *Monte-Carlo integration*, in which a sum of function values of randomly sampled points is used to estimate the integral of a function:
$$\int_a^b f(x) dx \approx \frac{b-a}{N} \sum_{i=1}^N f(x_i)
\label{eq:MonteCarlo1D}$$
where the points $x_1,\ldots,x_N$ are sampled uniformly at random from the range $[a,b]$. The approximation error of this estimate, assuming $f$ is well-behaved, is on the order of $O((b-a)/\sqrt{N})$.

Monte-Carlo integration can be generalized to higher-dimensional functions. If $B$ is an axis-aligned, $n$-dimensional box $[a_1,b_1]\times\cdots\times[a_n,b_n]$, then the integral over $B$ can be approximated
$$\int_B f(x) dx \approx \frac{|B|}{N} \sum_{i=1}^N f(x_i)
\label{eq:MonteCarloND}$$
with $|B|=\prod_{i=1}^n (b_i-a_i)$ the volume of $B$, where $x_1,\ldots,x_N$ are sampled uniformly at random over $B$. Specifically, a uniform sampling draws random variables $\epsilon_i \in [0,1]$ uniformly at random, and sets
$$x_i = a_i + \epsilon_i (b_i - a_i).$$
The approximation error of ($\ref{eq:MonteCarloND}$) is very similar to the 1D case: $O(|B|/\sqrt{N})$. Observe that the dimensionality $n$ did not appear at all in this equation!

The sampling-based planning methods introduced in this chapter are somewhat inspired by Monte-Carlo integration. In particular, their performance is *not immediately affected by C-space dimensionality*. This is extremely appealing! However, like Monte-Carlo methods, these wins come at a price. In Monte-Carlo sampling, there is a hidden constant in the approximation error that depends on the variance of $f$ across the domain. Likewise, sampling-based planning induces a probabilistic chance of failure, and the risk of failure is highly dependent on the *visibility properties* of the free space. We will investigate these concepts more formally below.
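As a quick illustration, the sketch below estimates a 6-D integral using the box formula above; the integrand and sample counts are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, a, b, N):
    """Monte-Carlo estimate of the integral of f over the box [a1,b1]x...x[an,bn]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    x = a + rng.random((N, a.size)) * (b - a)  # N uniform samples in the box
    return np.prod(b - a) * f(x).mean()

# Example: integrate exp(-||x||^2) over [0,1]^6; error shrinks like O(1/sqrt(N))
f = lambda x: np.exp(-np.sum(x**2, axis=1))
for N in (100, 10_000, 1_000_000):
    print(N, mc_integrate(f, [0.0]*6, [1.0]*6, N))
```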
Sampling-based motion planning
------------------------------

The most popular set of techniques for motion planning on robots with 5 or more DOFs is the class of sampling-based motion planners, most notably the probabilistic roadmap (PRM) and rapidly-exploring random tree (RRT) planners. All such techniques are roadmap methods that build a network of paths in C-space, but they use different strategies for doing so.

There are three general reasons for their popularity:

1. The same algorithm can be generalized to new problems of arbitrary dimensionality simply with changes of the $Feasible(q)$ and $Visible(p,q)$ subroutines.
2. They often produce paths quickly for high-dimensional problems that are not too maze-like, and given enough time can eventually solve problems of arbitrary complexity (probabilistic completeness).
3. They can be implemented fairly quickly by a student competent in algorithms and graph data structures.

By "high-dimensional" we mean that sampling-based planners can routinely solve problems in spaces of approximately 10D, and with tuning (or luck) can also find feasible paths in dozens or even hundreds of dimensions. However, these planners are not a "magic bullet", and a deeper analysis of their performance characteristics — both algorithmically and as they relate to the *visibility properties* of the underlying planning problem — is required to understand when they can be used effectively.

### Probabilistic roadmaps

A probabilistic roadmap (PRM) is an approximate roadmap of the robot's free space, built by randomly sampling free configurations and attempting connections between them. The roadmap can then be searched as usual for a path from the start to the goal. The basic algorithm for constructing a PRM is as follows:

1. Sample $N$ configurations at random from the C-space ([Figure 1](#fig:PRM).a--b).
2. Add all feasible configurations and the start and goal to the roadmap. These are known as *milestones*. ([Figure 1](#fig:PRM).c)
3. Test pairs of nearby milestones for visibility, and add visible edges to the roadmap. ([Figure 1](#fig:PRM).d--e)
4. Search for a path from the start to the goal. ([Figure 1](#fig:PRM).f)

************

![fig:PRM](figures/planning/prm.svg)

Figure 1. Illustrating the main steps of the PRM algorithm.

************

It can be shown that the method is *probabilistically complete*, in that if it finds a path, the path will be feasible. If it does not find a path, then this answer might be incorrect. However, the chance of this incorrect answer decreases to 0 as $N$ increases, given some relatively mild assumptions on the shape of the free space. Another useful property is that the likelihood of success is *not directly* dependent on dimensionality, but rather on the *visibility properties* of the free space. As a result, it can reliably solve high-dimensional problems that have good visibility properties, and perform poorly in low-dimensional problems that have poor visibility properties (such as narrow passages).

#### Algorithm and key parameters

The basic PRM algorithm is given in [Algorithm Basic-PRM](#alg:PRM). The first for loop adds any sampled feasible configurations to the roadmap $(V,E)$. The second for loop checks for pairs of "nearby" milestones, and adds edges as long as the path between them is collision-free.

******************

**Algorithm Basic-PRM(s,g,N)**

1. $V \gets \{ s, g \}$.
2. $E \gets \{ \}$.
3. **for** $i=1,\ldots,N$ **do**
4. $q \gets Sample()$
5. **if** not Feasible($q$) **then** return to Line 3.
6. Add $q$ to $V$ (add $q$ as a new milestone)
7. **for** all $p \in near(q,V)$
8. **if** Visible($p,q$) **then**
9. Add $(p,q)$ to $E$.
10. Search $G = (V,E)$, with Cartesian distance as the edge cost, to connect $s$ and $g$.
11. **return** the path if one is found.

****************

There are two key subroutines to tune, which greatly affect running time and performance:

- The sampling distribution over $\mathcal{C}$ (the $Sample()$ subroutine in Line 4).
- The method for determining nearby milestones in Line 7.

Since at most $N$ milestones will be added to the roadmap, it is important to place as many as possible inside $\mathcal{F}$ at critical locations that aid the planner in connecting the start and goal. The $Sample()$ subroutine can be tuned to do so. First, in order to have a nonnegligible chance of sampling a milestone in a useful location, PRMs require that the C-space is bounded.
In the most basic uniform distribution it is assumed that $\mathcal{C}$ is a box $[a_1,b_1]\times \cdots \times [a_n,b_n]$, and $Sample()$ samples configurations uniformly across the box. However, there are other methods for improving performance, which we shall discuss later.

If we were to check all edges in the roadmap, this would lead to a total of $O(N^2)$ visibility checks. This can be rather computationally expensive for large $N$, and long paths are much more likely to collide than short ones. Hence the idea of restricting visibility tests only to "nearby" points is a fast way of determining a small set of potential edges that have a better chance of being feasible. To do this, we first need to determine a distance metric
$$d(p,q)$$
that measures some notion of path length. The simplest form might measure Euclidean distance, but for configuration spaces that blend translation and rotation it is often more suitable to use a weighted geodesic metric.

Once a metric is defined, most frequently one of two methods is employed to calculate the set of milestones "near" $q$:

- The $k$-*nearest neighbors* of $q$ in $V$.
- The $R$-*neighborhood*: all milestones in $V$ within some radius $R$ of $q$.

Using fast *nearest-neighbor data structures*, which will be described later, the $k$-nearest neighbors can be computed in $O(k + \log N)$ time, and the $R$-neighborhood can be computed in $O(h + \log N)$ time, where $h$ is the number of points returned. In any case, this usually saves a huge amount of time because $k$, $h$, and $\log N$ are much smaller than $N$, and distance queries are fast. If we are careful not to double-check the reverse of an edge that has been checked before, at most $kN/2$ (or $hN/2$) edges will be checked in the roadmap.
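Putting the pieces together, here is a minimal sketch of Basic-PRM in Python; the `sample`, `feasible`, `visible`, and `dist` callables are assumed to be supplied by the application (e.g., built as in the earlier sketch), and brute-force $k$-nearest neighbors is used for clarity rather than speed.

```python
def basic_prm(s, g, N, k, sample, feasible, visible, dist):
    """Build a PRM roadmap (V, E); V[0]=s and V[1]=g are the query endpoints."""
    V = [s, g]
    E = {0: set(), 1: set()}
    for _ in range(N):
        q = sample()
        if not feasible(q):
            continue
        i = len(V)
        V.append(q)
        E[i] = set()
        # Brute-force k-nearest neighbors among existing milestones.
        for j in sorted(range(i), key=lambda j: dist(q, V[j]))[:k]:
            if visible(q, V[j]):
                E[i].add(j)
                E[j].add(i)
    return V, E  # search (V, E) with e.g. Dijkstra or A* to connect 0 and 1
```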
**return** "no path"****************Every time an isolated milestone gets added to the graph (Lines 3, 4, and 9), it gets assigneda connected component in the data structure $CC$. The connectedcomponents are maintained as more edges are added to the roadmap([Figure 2](fig:IncrementalPRM)). Once $s$ and $g$ are in the sameconnected component (Line 14), the algorithm stops. In Line 5, the mainloop is stopped by a *time limit* $T$ rather than an iteration limit.This means the overall running time can be controlled more precisely,which is useful if the robot needs to generate a path within a certaindeadline.To give more details about the $CC$ data structure, it can be thought ofas a map from each milestone to its connected component: that is,$CC[v]$ set of milestones $w \in V$ that can be reached from $v$ via apath in the $G=(V,E)$. After each change of $G$, $CC$ is updated toreflect any changes in connected component. The difficulty is that eachtime an edge gets added, the connected components of those two pointsneed to be *merged* (Line 13). If this were done in a naive fashion(say, by storing a list of connected milestones per milestone), it couldtake an extremely long time ($O(|V|)$, where $|V|$ is the number ofvertices currently in $V$) for each update. Fortunately, there is aspecial *disjoint set* (aka union-find) data structure that is very fastat maintaining these sets. With this data structure, it has been proventhat a merge takes O($\alpha(|V|)$) time on average, where $\alpha(n)$is a very, very slow growing function of $n$ called the inverseAckermann function. It grows so slowly, in fact, that for all practicalvalues of $n$ it is less than 5, and hence this can be considered aconstant. Overall, to perform $|E|$ edge insertions the overheadintroduced by this data structure is $O(|E| \alpha(|V|))$, which isessentially $O(|E|)$, and hence no additional asymptotic running timepenalty is incurred by the incremental algorithm. Empirical performanceThe performance of PRMs in practice can be quite variable from problemto problem, and even run to run (depending on the initial seed of arandom number generator). In typical problems, the probability ofsuccessfully finding a path increases as $N$ increases. After a "burnin" phase with low $N$ where there is no change of finding a solution,the probability of success rises fairly sharply before tapering off. Theslope of this tapering off depends on the visibility characteristics ofthe problem, which we shall discuss below.***************| (a) PRM, w=0.01 | (b) PRM, w=0.005 | (c) PRM, w=0.0025 ||------------------------------------------------------------------|-----------------------------------------------------------|------------------------------------------------------------------|| ![fig:PRMPerformance_a](figures/planning/prm_0.01_histogram.png) | ![fig:PRMPerformance_b](figures/planning/prm_0.005_histogram.png) | ![fig:PRMPerformance_c](figures/planning/prm_0.0025_histogram.png) |Figure 3. Histograms of PRM failure rate as more time is spent planning, taken over 100 runs on the same narrow passage problem. As the narrow passage width w shrinks, the likelihood of failure for a given time duration increases.***************The average time needed for incremental PRM to solve a given problemdepends on a number of factors. Firstly, efficient collision queries areessential, since thousands or millions of queries will be made during a"reasonably" sized run $(N=$1,000 to 100,000, usually). 
#### Empirical performance

The performance of PRMs in practice can be quite variable from problem to problem, and even run to run (depending on the initial seed of a random number generator). In typical problems, the probability of successfully finding a path increases as $N$ increases. After a "burn in" phase with low $N$ where there is no chance of finding a solution, the probability of success rises fairly sharply before tapering off. The slope of this tapering off depends on the visibility characteristics of the problem, which we shall discuss below.

***************

| (a) PRM, w=0.01 | (b) PRM, w=0.005 | (c) PRM, w=0.0025 |
|------------------------------------------------------------------|-----------------------------------------------------------|------------------------------------------------------------------|
| ![fig:PRMPerformance_a](figures/planning/prm_0.01_histogram.png) | ![fig:PRMPerformance_b](figures/planning/prm_0.005_histogram.png) | ![fig:PRMPerformance_c](figures/planning/prm_0.0025_histogram.png) |

Figure 3. Histograms of PRM failure rate as more time is spent planning, taken over 100 runs on the same narrow passage problem. As the narrow passage width w shrinks, the likelihood of failure for a given time duration increases.

***************

The average time needed for incremental PRM to solve a given problem depends on a number of factors. First, efficient collision queries are essential, since thousands or millions of queries will be made during a "reasonably" sized run ($N =$ 1,000 to 100,000, usually). Second, the nearness criterion (either number of neighbors $k$ or radius $R$) should be set so that a small number of edges are checked, but not too small.

The path quality of PRM solutions can be quite variable. If there exists a suboptimal wide passage in $\mathcal{F}$ while the optimal passage is narrow, incremental PRM will find the suboptimal one first (with very high likelihood). Basic PRM will find the suboptimal one when $N$ is low, but does have a chance to find the optimal one once $N$ is large enough. There is also an issue of jerkiness of the produced path, which tends to be more pronounced when incremental PRM is used, or fewer neighbors are connected. We shall see strategies to address the jerkiness of PRM paths when discussing [optimizing roadmaps](#Optimizing-probabilistic-roadmaps) and [shortcutting](#Shortcutting).

#### Analysis of visibility and performance

The running time of PRM is composed of $n$ configuration collision checks, $kn$ edge collision checks (for the $k$-nearest-neighbor variant), and $n$ nearest-neighbor queries. Let's assume the likelihood of sampling a feasible configuration is constant and nonzero, and that configuration and edge collision checks are $O(1)$. Assuming brute-force nearest-neighbor queries, the overall running time is $O(n^2)$. However, using [more sophisticated nearest-neighbor queries](#Nearest-neighbor-queries), this can be reduced to $O(n \log n)$.

The basic PRM and incremental PRM algorithms have been shown to be [probabilistically complete](WhatIsMotionPlanning.ipynb#Completeness-and-optimality) under relatively mild conditions. This implies that the likelihood that the roadmap connects the start and goal (assuming they are connected in $\mathcal{F}$) approaches 1 as more milestones are added to the roadmap. But how quickly do they converge?

A key factor in the theoretical analysis (and empirical performance) of a PRM is the *visibility properties* of the free space. Using the language of unfavorable / favorable visibility properties, we can mathematically formalize the intuitive notions of a "narrow passage" and "wide passage". To do this, we will need to introduce several definitions.

First, we define a *measure* $\mu(X)$ that assigns any free space subset $X \subseteq \mathcal{F}$ a nonnegative scalar. Measures have a whole host of requirements, but intuitively, $\mu(X)$ measures the volume of $X$. (Note that if $X$ has lower dimension than $\mathcal{C}$, then $\mu(X)=0$; such sets include points, lines, etc.) Next, we define the visibility sets.

> The **Visibility set** of a free configuration $q$ is the subset $\mathcal{V}(q) \subseteq \mathcal{F}$
> of points that are visible from $q$.
> Specifically, $\mathcal{V}(q) = \{ q'\in \mathcal{F} \,|\,\text{IsVisible}(q,q')\}$.

It is typically useful to think of visibility as respecting a given connection radius, i.e., the constant $R$ in an $R$-neighborhood connection strategy. We can also similarly define the visibility set of a set of points as the union of the visibility sets of each point: $\mathcal{V}(X) = \{ q' \in \mathcal{F}\,|\,q' \in \mathcal{V}(q) \text{ for some }q\in X \}$.

***************

![fig:Visibility](figures/planning/visibility.svg)

Figure 4. Some visibility sets of various points and spaces. A PRM will require more samples to connect to points with small visibility sets. Moreover, $\epsilon$-goodness is determined by the point in the free space with the smallest visibility set.
A convex space is $(\epsilon=1)$-good, while spaces with cusps and features of lower dimension are not $\epsilon$-good for any $\epsilon > 0$.

***************

Intuitively, the milestones in a PRM $(V,E)$ with $n$ milestones are likely to connect to a new milestone if the visibility set of $V$ is large. Formally, if a new milestone $q$ is sampled uniformly at random from $\mathcal{F}$, then the probability that it can be connected to $V$ is exactly $\mu(\mathcal{V}(V))/\mu(\mathcal{F})$. Since visibility is symmetric, the probability that $q$ cannot be connected to any of the milestones in $V$ is equal to the probability that $q$ cannot be connected to any of $n$ random configurations. Since the milestones are drawn independently at random, this probability is $(1-\mu(\mathcal{V}(q))/\mu(\mathcal{F}))^{n}$. Hence, we obtain the result:

> The probability that a configuration $q$ can be connected to a PRM with $n$ milestones is $Pr(q\text{ connected}) = 1 - (1-\mu(\mathcal{V}(q))/\mu(\mathcal{F}))^{n}$, assuming that $q$ and each of the milestones is drawn at random from $\mathcal{F}$.

Note that this value rapidly approaches 1 as $n$ increases, as long as $\mu(\mathcal{V}(q))>0$. What this also shows is that if visibility properties are not uniform across the free space — that is, visibility sets are small in some areas (narrow passages) and large in others (wide passages) — PRMs will have a harder time connecting milestones in narrow passages. This is because the speed at which $Pr(q\text{ connected})$ approaches 1 is dependent on $\mu(\mathcal{V}(q))/\mu(\mathcal{F})$, with larger values converging to 1 much faster than smaller values. (On average, $\mu(\mathcal{F})/\mu(\mathcal{V}(q))$ milestones will be needed in $|V|$ before $q$ lies in the visibility set of $V$.)

We can analyze this situation further using bounds that depend on the shape of the free space. Suppose that the minimum volume of any configuration's visibility set is $\epsilon = \inf_{q\in \mathcal{F}}\mu(\mathcal{V}(q))/\mu(\mathcal{F})$. Then, for any point $q$ sampled at random, the probability that it can be connected to a given point $q'$ is at least $\epsilon$. If $\epsilon > 0$, we say that the free space is $\epsilon$**-good**. Since the visibility bound $\epsilon$ holds across the space, we can see that the probability that any $q$ is in the visibility set of $V$ is at least $1 - (1-\epsilon)^{n}$.

Keep in mind that we have not mentioned dimensionality $d$ in this discussion, only volumetric ratios, so the performance here has no direct relation to $d$. However, note that with a fixed connection radius $R$, the volume of any visibility set cannot be greater than $O(R^d)$ (the volume of an $R$-ball), and hence there is an implicit exponential dependence of performance on dimension. This also shows that to improve a PRM's visibility properties in spaces of higher dimension, it is necessary to set the connection radius $R$ relatively large.
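To get a feel for these numbers, the quick calculation below evaluates the $1 - (1-\epsilon)^{n}$ bound for a few values; the choices of $\epsilon$ and the target failure probability are arbitrary.

```python
import math

# Probability that a new configuration connects to an n-milestone PRM
# in an eps-good space, for a few values of eps and n.
for eps in (0.1, 0.01, 0.001):
    print(eps, [1 - (1 - eps)**n for n in (100, 1000, 10000)])

# Milestones needed so that the failure probability (1-eps)^n drops below delta:
eps, delta = 0.01, 1e-6
print(math.ceil(math.log(delta) / math.log(1 - eps)))  # about 1,375
```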
Is $\epsilon$-goodness all that we need to analyze PRM performance? No! Notice that we have only addressed the problem of whether a point can be connected to a single milestone in the PRM, but not whether it can reach all other reachable milestones with a feasible path. Specifically, we need to examine whether milestones in the same connected component of $\mathcal{F}$ are also in the same connected component of $(V,E)$. For this, we need a concept called **expansiveness**. Intuitively, a space $\mathcal{F}$ has high expansiveness if for any partition of $\mathcal{F}$ into two sets $A$ and $B$, a significant portion of $A$ can see a significant portion of $B$. This means that the likelihood that one or more edges of the PRM cross any boundary in $\mathcal{F}$ increases as more milestones are added.

***************

![fig:Expansiveness](figures/planning/expansiveness.svg)

Figure 5. (a) Even if a space is $\epsilon$-good, a PRM may have a difficult time connecting two regions. (b) A $\beta$-lookout of a set $X$ is the subset of $X$ that can see a $\beta$ fraction of its complement. (c) A narrow passage causes certain $\beta$-lookouts to have small volume, reducing the expansiveness of the space. (d) A convex set is maximally expansive ($\beta=1$).

***************

More formally, we describe a simplified version of the argument in Hsu et al. (1997). Let us define the $\beta$*-lookout* of a subset $X\subset \mathcal{F}$ as the subset of configurations in $X$ that can see a $\beta$-fraction of the complement of $X$. Mathematically, this is defined as follows:

> The $\mathbf{\beta}$**-lookout** of $X$ is the set $\beta\text{-lookout}(X) = \{ q \in X\,|\,\mu(\mathcal{V}(q)\cap \bar{X}) \geq \beta \mu(\bar{X}) \}$, where $\bar{X} = \mathcal{F} \setminus X$ is the complement of $X$.

We define the _expansiveness_ $\beta$ of $\mathcal{F}$ as the largest value such that, for any partition $\mathcal{F} = X\cup \bar{X}$, the volume of $\beta$-lookout$(X)$ is at least $\beta \mu(X)$. If $\beta > 0$, then we say that the free space is $\beta$**-expansive**. (Note that $\beta \leq \epsilon$, since each point must see a $\beta$ fraction of its complement.)

It has been proven that for any $\beta$-expansive space, the probability that a roadmap fails to connect the start and the goal with a feasible path drops *exponentially* toward 0 (provided that they are in the same connected component of $\mathcal{F}$). Specifically, a bound can be formulated in the following form:
$$Pr(\text{failure} | n\text{ milestones}) \leq c(\beta) e^{-d(\beta) n}.$$
Moreover, the convergence constants are directly related to $\beta$, with larger values of $\beta$ leading to faster convergence (smaller $c$ and larger $d$). Exponential convergence bounds are favorable because they show that the expected running time and its variance are bounded, which is not true for all convergence rates (consider, for example, the bound $Pr(\text{failure} | n) \propto 1/n$). Intuitively, the method of proof considers the idea of a *linking sequence* of regions connecting $s$ and $g$, such that if a milestone is sampled in each region, then $s$ and $g$ will be connected. If the space is expansive, then it can be shown that such a linking sequence exists, has finite length, and the regions have non-zero measure. The details of these proofs are out of the scope of this book.

### PRM variants

#### Rapidly-Exploring Random Trees (RRTs)

One of the most popular PRM variants is the Rapidly-Exploring Random Tree (RRT) algorithm, which grows a tree rather than a graph. Originally developed for kinodynamic planning, it is easily adapted to kinematic planning as well. The specific variant we will discuss is called RRT-Connect, which is a *bidirectional* search.

RRT-Connect grows two trees of feasible paths, one rooted at the start and the other at the goal. At each iteration, both the start and the goal trees are *extended* toward a randomly sampled configuration. Then, if the trees are close enough, a connection will be attempted between them.
If connected, the joined trees contain a unique path from the start to the goal.

*********************

**Algorithm RRT-Connect**

1. $T_s \gets \{ s \}$.
2. $T_g \gets \{ g \}$.
3. **for** $i=1,...,N$ **do**
4. $q_{rand} \gets Sample()$
5. $q_e \gets$ Extend-Tree$(T_s,q_{rand},\delta)$ (extend start tree at most $\delta$ distance toward $q_{rand}$)
6. $q_e^\prime \gets$ Extend-Tree$(T_g,q_{rand},\delta)$ (extend goal tree at most $\delta$ distance toward $q_{rand}$)
7. **if** $d(q_e,q_e^\prime) \leq \delta$ and Visible($q_e,q_e^\prime$) **then** (trees are close enough)
8. Add edge $q_e\rightarrow q_e^\prime$ to connect $T_s$ and $T_g$
9. **return** the path from $s$ to $g$
10. **return** "no path"

*********************

*********************

**Algorithm Extend-Tree**$(T,q_{rand},\delta)$

1. $q_{near} \gets Nearest(T,q_{rand})$
2. $q \gets q_{near} + \min\left(1,\frac{\delta}{d(q_{rand},q_{near})}\right)(q_{rand}-q_{near})$
3. **if** Visible$(q_{near},q)$ **then**
4. Add edge $q_{near}\rightarrow q$ to $T$.
5. **return** $q$.
6. **return** $q_{near}$.

********************

Specifically, the pseudocode is listed in [Alg. RRT-Connect](#alg:RRTConnect). $T_s$ and $T_g$ denote the trees rooted at the start and goal, respectively. In Line 4, a random configuration is drawn, and in Lines 5 – 6, the trees are extended toward it along a straight line path using the Extend-Tree subroutine. RRT has a key parameter $\delta$, which is a limit on how far a tree can be extended on each step. In other words, every edge in each tree has length no more than $\delta$. Also, if the two extended milestones are within distance $\delta$, they are connected. For small values of $\delta$, it is more likely for each extension to succeed, but the tree makes slower progress in exploring the free space.

Pseudocode for Extend-Tree is given in [Alg. Extend-Tree](#alg:ExtendTree). It first performs a nearest-neighbor query on the milestones in the given tree to determine a milestone $q_{near}$. It then extends a short path no more than distance $\delta$ toward the destination $q_{rand}$. If this edge is visible, then it is added to the tree.

Unlike PRMs, RRTs do not use the configurations coming from the $Sample()$ function directly, nor do they attempt more than one edge connection per iteration. Hence, they sample points in a different distribution than PRMs. But what is this distribution? We first introduce the concept of a Voronoi diagram, which is defined for some set of points $X = \{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$. The Voronoi diagram is a partition of the space $\mathcal{C}$ into Voronoi cells, one per point. The cell $C_i$ corresponding to a point $\mathbf{x}_i$ is the subset of $\mathcal{C}$ for which $\mathbf{x}_i$ is the closest point. In other words,
$$C_i \equiv \{ \mathbf{x} \in \mathcal{C}\, | \, i = \arg \min_{j=1,\ldots,n} d(\mathbf{x},\mathbf{x}_j) \}$$

RRT is said to employ a Voronoi bias strategy because each milestone in a tree is selected for expansion (i.e., be the nearest node to $q_{rand}$) with probability proportional to the volume of its Voronoi cell. This means that milestones that are closer to unexplored areas of $\mathcal{C}$ have a higher likelihood of being expanded. Moreover, the extended milestone will have a higher likelihood of extending the tree in unexplored directions (and hence the term *rapidly exploring* applies here).
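A minimal Python sketch of Extend-Tree is below, assuming Euclidean configurations stored as numpy arrays, a brute-force nearest-neighbor scan, and externally supplied `visible` and `dist` functions; the tree is represented by a node list and a parent map, which is one convenient choice rather than a prescribed structure.

```python
import numpy as np

def extend_tree(nodes, parent, q_rand, delta, visible, dist):
    """Extend the tree at most delta toward q_rand; returns the extended node."""
    i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], q_rand))
    q_near = nodes[i_near]
    d = dist(q_near, q_rand)
    if d == 0.0:
        return q_near  # q_rand is already in the tree
    q = q_near + min(1.0, delta / d) * (np.asarray(q_rand) - q_near)
    if visible(q_near, q):
        nodes.append(q)
        parent[len(nodes) - 1] = i_near  # record the tree edge q_near -> q
        return q
    return q_near
```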
RRTs are appealing because tree data structures are a bit simpler to implement than graphs. Also, the RRT explores locally first, so if the start and goal are nearby, the RRT may do significantly less work than a PRM. However, RRT performance is generally more sensitive to the choice of a distance metric, and is generally better at exploring than refining.

*******************

![fig:RRTBugtrap](figures/planning/bugtrap.svg)

Figure 6. To escape the mouth of a bugtrap, an RRT needs to sample a very carefully chosen sequence of milestones within the general area that it has already explored. But due to the Voronoi bias, it frequently attempts infeasible extensions from the highlighted frontier nodes.

*******************

As an example, the "bugtrap" problem illustrated in [Figure 6](#fig:RRTBugtrap) tends to pose challenges for the RRT. In this and many other problems, a planner needs to strike a balance between *exploration* toward new areas and *refinement* of the roadmap in existing areas. Let's assume RRT only grows a tree from the start; it is easy to imagine double-bugtraps that cause the same behavior for the goal. Here, the bug has a very difficult time wiggling out of the opening of the trap because it appears purely from the Voronoi bias that the frontier has not yet been adequately explored. However, each attempted extension ends up bumping into the walls of the trap. A *sequence* of precisely-chosen values of $q_{rand}$ is needed to escape the trap, which is highly unlikely to occur by chance.

Moreover, the theoretical analysis of RRT is more challenging because its tree expansion strategy is history-dependent. In fact, the probabilistic completeness proof contained in the original RRT paper has been shown to be flawed, and has only been corrected recently! The best exponential convergence bound found so far also shows that the expected running time is dependent on a factor of the form $c^{-d}$, where $c$ is the minimum of $\delta$ and the clearance of some feasible path connecting the start and goal, and $d$ is the dimension (Kleinbort et al., 2018). This bound is, however, extremely loose, and RRT empirical performance is not directly correlated with dimensionality; like PRM, it typically enjoys better performance in spaces with favorable visibility properties. One caveat is that the expansion radius $\delta$ must be set larger in spaces of higher dimension to avoid extremely slow convergence. In general it can be challenging to say whether an RRT or PRM will work better for a given problem without empirical testing.

#### Nonuniform sampling strategies

Since PRM and RRT performance depends highly on how well samples are placed in critical regions, several strategies have been developed to boost performance with nonuniform sampling. PRMs benefit from placing more samples in *low-visibility regions*, which requires identifying areas that are relatively constrained or close to obstacles. One way to do this is to record how many feasible and infeasible edges were attempted for each milestone (these are stored as counts $n_f[q]$ and $n_i[q]$, respectively, for each $q\in V$). After $N$ samples, more samples are added near the milestones with a large fraction of infeasible edges, with the hope that these milestones are located in low-visibility regions where a denser sampling is needed to make connections. Specifically, we might pick a milestone $q \in V$ with probability proportional to $n_i[q] / (n_i[q] + n_f[q])$ and then sample a new configuration from a disk centered at $q$ with radius $R$. If feasible, the new milestone is connected to the roadmap as usual.
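A sketch of this resampling step might look as follows; the `sample_ball` helper (uniform sampling from a ball around a milestone) is a hypothetical name, and how often this step is interleaved with ordinary uniform sampling is left as a tuning choice.

```python
import random

def sample_near_low_visibility(V, n_i, n_f, R, sample_ball, feasible):
    """Pick a milestone index weighted by its fraction of infeasible edge
    attempts, then draw a new sample near it. n_i and n_f are per-milestone
    counters (lists indexed like V)."""
    weights = [n_i[j] / max(1, n_i[j] + n_f[j]) for j in range(len(V))]
    if sum(weights) == 0:
        return None  # no failure statistics yet; fall back to uniform sampling
    j = random.choices(range(len(V)), weights=weights)[0]
    q_new = sample_ball(V[j], R)  # hypothetical helper: uniform sample in a ball
    return q_new if feasible(q_new) else None
```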
Another method that can boost PRM performance in low-visibility spaces is the Gaussian sampling strategy. The idea is to increase the density of milestones near the boundaries of obstacles, since low-visibility regions will certainly be located near obstacles. The method actually draws two samples: one $q_1$ at random, and the second $q_2$ from a multivariate Gaussian distribution (see [Appendix A.3.](Probability.ipynb#Multivariate-Gaussians)) centered at $q_1$ and with standard deviation $\sigma$. Then, *only if exactly one of the samples is feasible*, that sample is kept. Otherwise, both are thrown out. This ensures that the segment between $q_1$ and $q_2$ straddles the boundary between the free space and forbidden region.

It might seem odd to throw away perfectly good feasible samples, since adding them to the roadmap won't hurt (and can only help) connectivity. However, every additional milestone incurs additional work to test and connect edges. In fact, edge collision checking is often the dominant cost of planning. It turns out that in the presence of narrow passages, the added cost to generate samples is worth it, and Gaussian sampling can perform quite well. However, for best performance the perturbation standard deviation $\sigma$ must be tuned to trade off these competing costs.

RRTs benefit from a slight *goal bias* which drives the tree toward the goal. In RRT-Connect, this could take the form of sampling $q_{rand}$ from $T_{g}$ some fraction of the time, which would drive extensions of $T_{s}$ toward the goal. Similarly, the reverse search could sample $q_{rand}$ from $T_s$ some fraction of the time, and drive extensions from $T_g$ toward the start. This takes the form of replacing Lines 4 – 6 in [Algorithm RRT-Connect](#alg:RRTConnect) with the following code:

1. **if** $rand() \leq p_{gb}$ **then**
2. **if** $rand() \leq 0.5$ **then**
3. $q_{e}^\prime \gets RandomVertex(T_g)$
4. $q_{e} \gets$ Extend-Tree$(T_s,q_{e}^\prime,\delta)$
5. **else**
6. $q_{e} \gets RandomVertex(T_s)$
7. $q_{e}^\prime \gets$ Extend-Tree$(T_g,q_{e},\delta)$
8. **else** Perform Lines 4 – 6 as usual

Here the term $p_{gb}$ in Line 1 is the probability of using goal biasing, and Line 2 decides according to an unbiased coin flip whether to extend toward the start or toward the goal. The function $rand()$ samples from the uniform distribution on $[0,1]$.

#### Multi-query PRMs

Another variant of PRMs that is useful in some scenarios is the *multi-query* PRM. As presented, the PRM method finds a path for a given $(s,g)$ query and then throws out the roadmap for the next query. In some cases, we would like the robot to plan another path in the same environment. Or, a team of robots may be traversing the same environment. In this case, it makes sense to *precompute* a good PRM and then reuse it for multiple queries. This is because the primary cost of PRM planning is in the construction phase, while the graph search phase is quite fast.

PRM construction proceeds like before, but without any endpoints. Then, to query the existing roadmap for a start and goal $(s,g)$, we try connecting $s$ and $g$ to nearby milestones using visibility queries. Then, the augmented PRM is searched for a path. To keep the roadmap from growing if many queries are to be made, $s$, $g$, and all the edges connecting them are removed from the roadmap before terminating the query.

#### Lazy collision checking in PRMs

For complex robots and/or environments, such as those composed of CAD models, the most significant computational expense in PRMs and RRTs is checking visibility of edges (i.e., [dynamic collision checking](#Dynamic-collision-checking)), because each check may require tens, hundreds, or thousands of static collision checks.
Furthermore, for complex robots, self-collision testing may need to be performed between all *pairs* of links, so even a single static collision check can take milliseconds of compute time. This can add up quickly, as the roadmap begins to contain thousands of milestones.

An effective heuristic for accelerating PRM performance is to perform *lazy collision checking*, which delays collision checking for edges until a candidate path to the goal is found. The hypothesis is that if the endpoints of an edge are collision-free, then the path between them is also likely to be free. Since most edges in the PRM aren't used in the final path, it is a waste to devote effort checking their collision status. If the path does contain a collision, the offending edge can be removed and planning can continue.

The Lazy PRM algorithm can be implemented in both basic and incremental forms. A lazy [Basic PRM](#PRM) variant is as follows:

1. Create a PRM $(V,E)$, assuming IsVisible always returns true.
2. Find a path from $s$ to $g$, $v_1=s,v_2,\ldots,v_{m-1},v_m=g$ using search. If no path is found, return failure.
3. Check each edge IsVisible$(v_i,v_{i+1})$, $i=1,...,m-1$ for collision.
4. If any edge $(v_i,v_{i+1})$ is not feasible, delete it from $E$ and return to 2.
5. If all edges are feasible, return $v_1 \rightarrow v_2 \rightarrow\cdots \rightarrow v_m$ as the path.

In this design, it is helpful to cache which edges have been found to be visible to avoid re-checking edges in step 3. Another speed improvement is to use the costs of optimal paths to $g$ in the original PRM as a heuristic for A* search (used in the Batch Informed Trees* algorithm).

A lazy [incremental PRM](#alg:IncrementalPRM) variant is as follows:

1. During roadmap construction, IsVisible is assumed to always return true.
2. In Line 15, once a path is found to the goal, the shortest path connecting $s$ and $g$ is checked for collision, as in steps 2 – 5 in the lazy Basic PRM variant.
3. Connected components need to be recomputed when edges are found to be infeasible.

To implement this efficiently, step 3 must be implemented so that connected components can be updated quickly when the graph changes. One way of doing this in conjunction with the path search is a *dynamic shortest paths* data structure, which stores the cost of the shortest path to every node in the graph. This data structure should be updated every time an edge is added or removed. Although in the worst case, $O(n)$ costs must be updated, the vast majority of cases are typically cheap.
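The lazy Basic PRM query loop is short enough to sketch directly; here `shortest_path` and `remove_edge` stand in for whatever graph library is used (e.g., a networkx-style interface is assumed), and edge feasibility results are cached as described above.

```python
def lazy_prm_query(G, s, g, visible, shortest_path):
    """Repeatedly search, then check only the edges on the candidate path."""
    checked = {}  # cache: frozenset({u, v}) -> bool
    while True:
        path = shortest_path(G, s, g)  # treats unchecked edges as feasible
        if path is None:
            return None  # no more candidate paths in the roadmap
        for u, v in zip(path, path[1:]):
            e = frozenset((u, v))
            if e not in checked:
                checked[e] = visible(u, v)
            if not checked[e]:
                G.remove_edge(u, v)  # drop the offending edge and replan
                break
        else:
            return path  # every edge on the path was verified feasible
```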
To adapt RRT to perform lazy collision checking, we have a problem figuring out what to do with infeasible edges. Suppose we find that an edge near the start is infeasible: discarding it would break the tree structure, or we could delete the subtree of descendants of the edge, but this would waste a significant amount of prior effort. Instead, a bidirectional tree-based lazy collision checking strategy, introduced in the SBL algorithm (Sanchez-Ante and Latombe, 2003), avoids discarding subtrees. It maintains bidirectional trees as in RRT-Connect, and checks edges for collision once a path is found from the start to the goal. If an edge is found to be in collision, then it *switches the subtree* of descendants of that edge to the other tree. This takes a degree of bookkeeping to update the tree data structures, but can be done quickly.

Optimizing probabilistic roadmaps
---------------------------------

Both PRM and RRT are probabilistically complete, i.e., are increasingly likely to find a feasible path as more samples are drawn. A natural question to ask is whether they produce paths that are close to optimal. Well, it is clear that if incremental PRM or RRT were to terminate on the first path found, these paths may be far from optimal. But if they were to *continue planning past the first path*, then perhaps better and better paths could be found. This idea forms the basis of the PRM\* and RRT\* algorithms, which have been shown to be *asymptotically optimal* (Karaman and Frazzoli, 2009).

> **Asymptotically-optimal planner**. A planner is asymptotically-optimal if the cost $c(n)$ of the path that it
> produces after $n$ iterations approaches the cost of the optimal path $c^\star$ as $n$ increases. If the
> planner is probabilistic, asymptotic optimality means that the *probability* that the cost
> $c(n)$ does not approach $c^\star$ is 0. Specifically,
> $Pr(\lim_{n\rightarrow \infty} c(n) \neq c^\star)=0$.

### PRM\* and RRT\*

PRM has been shown to be asymptotically-optimal using the $R$-neighborhood connection strategy, but not the $k$-nearest-neighbors strategy. The $R$-neighborhood strategy, however, eventually tries to connect $O(n^2)$ pairs of milestones in the limit of large $n$. It has been proven that using a *dynamic choice* of $R$ and $k$ can lead to an asymptotically optimal PRM planner, specifically the values $R^\star(n) \propto (\frac{\log n}{n})^{1/d}$ and $k^\star(n) \propto \log n$. Note that $R^\star$ shrinks toward zero and $k^\star$ grows, so that in both cases, each new milestone is expected to be connected to $O(\log n)$ milestones, which grows asymptotically. Hence, the number of edges in the roadmap is expected to be $O(n \log n)$. (Note that there is a constant factor in these expressions that depends on the volume of free space and the distance measure, and it must be set sufficiently large or else asymptotic optimality no longer holds.)

*************

| (a) PRM, $k=5$ neighbors | (b) PRM, $R=0.1$ connections | (c) PRM\* | (d) RRT\* |
|---------------------------------------------------------------|--------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------|
|![fig:PRMStar_a](figures/planning/prm_knn.gif) | ![fig:PRMStar_b](figures/planning/prm_neighborhood.gif) | ![fig:PRMStar_c](figures/planning/prmstar.gif) | ![fig:PRMStar_d](figures/planning/rrtstar.gif) |

Figure 7. Convergence of various PRM and RRT variants. The fixed-$k$ strategy is not asymptotically optimal.

*************

RRT has been shown not to be asymptotically-optimal in any case, since the history-dependence of its tree growth strategy from the nearest configuration in the tree prevents it from taking advantage of shorter paths that may arise. The RRT\* algorithm introduces a new technique called "rewiring" that, upon expanding the tree, changes the tree structure if it is possible to improve path lengths by passing through a different nearby milestone. Let us assume a unidirectional RRT. The main differences introduced to RRT are:

1. Optimal costs $c(q)$ through $T$ are stored at each node $q$, and updated during Extend-Tree and Rewire.
2. After each successful extension, points in $T$ near the new milestone $q_e$ are checked for whether better paths can be found passing through $q_e$.
3. Extend-Tree sets $c(q) = c(q_{near}) + d(q_{near},q)$, and returns $nil$ if the tree could not be successfully extended.
*********************

**Algorithm RRT\***

1. $T \gets \{ s \}$.
2. **for** $i=1,...,N$ **do**
3. $q_{rand} \gets Sample()$
4. $q_e \gets$ Extend-Tree$(T,q_{rand},\delta)$
5. **if** $q_e \neq nil$ **then** Rewire$(T,q_{e},|T|)$
6. **if** $d(q_e,g) \leq \delta$ and Visible($q_e,g$) **then**
7. Add edge $q_e\rightarrow g$ to $T$
8. $c(g) = $ cost of optimal path from $s$ to $g$, if $g$ is connected, and $\infty$ otherwise
9. **return** the optimal path from $s$ to $g$ if $g$ was connected, and "no path" otherwise

*********************

*********************

**Algorithm Rewire**$(T,q_{new},n)$

1. Neighbors $\gets$ Set of $k^\star(n)$-nearest neighbors in $T$, or points in the $R^\star(n)$-neighborhood.
2. **for** $q\in$ Neighbors sorted by increasing $c(q)$ **do**
3. **if** $c(q_{new}) + d(q_{new},q) < c(q)$ **then** (optimal path to $q$ passes through $q_{new}$)
4. $c(q) \gets c(q_{new}) + d(q_{new},q)$
5. Update costs of descendants of $q$.
6. **if** $c(q) + d(q,q_{new}) < c(q_{new})$ **then** (optimal path to $q_{new}$ passes through $q$)
7. $c(q_{new}) \gets c(q) + d(q,q_{new})$
8. Set parent of $q_{new}$ to $q$.
9. Revise costs and parents of descendants of $q_{new}$.

***********************

Steps 5 and 9 can involve traversing large parts of the tree to update costs and parents, using a depth-first traversal of the tree. In particular, in step 9, the parent of $q_{new}$'s child $q_{c}$ should be set to $q_{new}$ if $c(q_{new}) + d(q_{new},q_{c}) < c(q_{c})$. Then, the update should be called recursively on $q_{c}$. If that condition does not hold, the recursion does not continue.

### Convergence rate

Due to their proven asymptotic optimality and relative ease of implementation, PRM\* and RRT\* have gained wide acceptance, and have spawned many variants. But how quickly do these planners converge to optimal?

First of all, PRM\* and RRT\* run slower than their normal counterparts to find the first feasible path, because they do more work per iteration. One way of mitigating this in RRT\* is to disable the rewiring step until a first feasible path is found, in which case the algorithm begins identically to RRT.

Secondly, PRM\* and RRT\* perform $n$ configuration feasibility checks and $O(n \log n)$ edge visibility checks. The number of configuration checks is the same as in PRM and RRT, but PRM performs $kn$ edge checks and RRT performs $n$. So we pay a logarithmic factor of computation speed to gain asymptotic optimality.

Third, the number of milestones needed to obtain a desired decrease in the suboptimality of the best path is exponential in the dimensionality. Consider two cases: either a) the planner does not yet have a path in the homotopy class of the optimal path, and hence must explore the space further globally to make progress, or b) the planner has a path fairly close to the optimal path and can just perform local sampling to improve the current best path. In case a), it can be shown that for any planner that only performs binary collision queries, the expected number of samples needed to obtain a solution in the optimal homotopy class is $\Omega(\delta^{-d})$ (that is, at least a constant times $\delta^{-d}$), where $\delta$ is the clearance of a path in the optimal homotopy class. The counterexample is shown below: the optimal path passes through the tiny block, and its visibility set volume is $O(\delta^{d})$, and at least two samples need to be placed there.

To address case b), we can also show that for a sampling-based planner to locally reduce the cost of the current best path, it must place samples in a region with volume $O((c(n)-c^\star)^{d-1})$.

TODO: expand on this and show figures
Shortcutting
------------

As noted above, PRM and RRT are only concerned with finding a feasible path, and often produce jerky, unnatural paths. Shortcutting is a very useful postprocessing heuristic, illustrated in [Figure 9](#fig:Shortcutting), in which portions of a path are repeatedly replaced with shorter, feasible segments.

*************

![fig:Shortcutting](figures/planning/shortcutting.svg)

Figure 9. A shortcutting heuristic can quickly smooth out the jerkiest parts of paths that are generated by a sampling-based planner.

*************

In order to do so, two random points are sampled along the path. With the path being a curve $y(s):[0,1]\rightarrow \mathcal{C}$, we sample two parameters $u,v \in [0,1]$. If $Visible(y(u),y(v))$ is true, then we replace the portion of the path between $u$ and $v$ with the straight line path. Otherwise, the sampling process begins again. This repeats for some number of iterations.

Shortcutting is only a local optimization technique, and not a very powerful one at that. But it is very fast, and this low overhead makes it a very practical method for getting rid of the worst parts of a jerky trajectory. In fact, we can construct an any-time planner that simply applies repeated restarts of an RRT (or PRM) followed by shortcutting. The shortest path found after shortcutting is maintained through each of these restarts. Eventually, we might get lucky and find a path close to optimal. It turns out that for many problems, this approach can outperform RRT\* (or PRM\*)!
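A simple milestone-based variant of shortcutting (sampling path vertices rather than the continuous parameters $u,v$, which keeps the sketch short) might look like this:

```python
import random

def shortcut(path, visible, iters=100):
    """Repeatedly replace portions of a milestone path with straight segments."""
    path = list(path)
    for _ in range(iters):
        if len(path) < 3:
            break  # nothing left to shortcut
        i, j = sorted(random.sample(range(len(path)), 2))
        if j - i < 2:
            continue  # adjacent milestones: no intermediate portion to remove
        if visible(path[i], path[j]):
            path = path[:i + 1] + path[j:]  # splice out the jerky middle
    return path
```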
Dynamic collision checking
--------------------------

So far we have assumed that edges in configuration space can be checked for collision using the $Visible$ subroutine, but checking collisions is not as straightforward as in simple geometric spaces, where we could simply check the collision status of a line segment. The simplest method for approximating the feasibility of a configuration-space line segment $\overline{ab}$ is to subdivide $\overline{ab}$ into small segments, with configurations $q_1=a,q_2,\ldots,q_{n-1},q_n=b$ uniformly spaced no more than $\epsilon$ distance apart. Then, each of $q_1,...,q_n$ is checked for collision using the $Feasible$ subroutine. The segment is considered visible if all configurations are feasible.

Note that this is only an approximate method that depends on the resolution $\epsilon$. If this is too large, then collisions may be missed between checked points $q_{i}$ and $q_{i+1}$ even if both $q_{i}$ and $q_{i+1}$ are feasible. On the other hand, if $\epsilon$ is too small, then this takes a lot of time. Precisely, the number of collision checks is $n = \lceil d(a,b) / \epsilon \rceil$.

Another issue is the order in which the configurations are checked. In the worst case, the edge is feasible, and all configurations must be checked for feasibility. However, if the edge is infeasible, then we can save time by finding an infeasible configuration quickly. Let us suppose that both $a$ and $b$ are feasible. Then, in the absence of additional information, the point that is most likely to lead to a collision is the midpoint $(a+b)/2$. This intuition gives a recursive implementation that is effective in practice:

*********************

**Algorithm Visible-Recurse**($a,b,\epsilon$)

1. If this is the first recursive call, check $a$ and $b$ for collision. Return false if either $\neg Feasible(a)$ or $\neg Feasible(b)$.
2. If $d(a,b) \leq \epsilon$ return true.
3. $m \gets (a+b)/2$
4. If $\neg Feasible(m)$, return false.
5. Return Visible-Recurse$(a,m,\epsilon)$ $\wedge$ Visible-Recurse$(m,b,\epsilon)$.

********************

This approach is illustrated in [Figure 10.a](#fig:dynamic-cc).

*************

| (a) Approximate dynamic CC with recursion | (b) Exact dynamic CC with adaptive recursion |
| ------------------------------------------|----------------------------------------------|
| ![fig:dynamic-cc](figures/planning/dynamic_cc.png) | ![fig:dynamic-cc-adaptive](figures/planning/dynamic_cc_adaptive.png) |

Figure 10. Approximate and exact dynamic collision checking methods.

*************

Although this approximate technique is by far the most widely used in practice, _exact_ dynamic collision checking methods are also available. These methods are based on similar recursions, but use additional information about the clearance of a configuration. Recall that the clearance $c(q)$ of a configuration is the distance in C-space to the nearest C-obstacle. If we can show that $d(a,b) \leq c(a)+c(b)$, then we can be certain that $\overline{ab}$ is collision-free ([Figure 10.b](#fig:dynamic-cc)). This is because the balls centered at $a$ and $b$ with radii $c(a)$ and $c(b)$ overlap, are in free space, and contain $\overline{ab}$ in their union. (In most cases, however, we do not have access to an exact clearance function, but this reasoning still works when $c(q)$ is any lower bound on clearance.) This gives the following exact algorithm:

*********************

**Algorithm Visible-Exact1**($a,b$)

1. If this is the first recursive call, check $a$ and $b$ for collision. Return false if either $\neg Feasible(a)$ or $\neg Feasible(b)$.
2. If $d(a,b) \leq c(a) + c(b)$ return true.
3. $m \gets (a+b)/2$
4. If $\neg Feasible(m)$, return false.
5. Return Visible-Exact1$(a,m)$ $\wedge$ Visible-Exact1$(m,b)$.

********************

This is an adaptive recursion method that terminates quickly when $a$ and $b$ are far from obstacles, but spends more time when the line segment passes close to obstacles.
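Before moving on to exact tests with workspace bounds, here is a minimal Python sketch of the approximate bisection test Visible-Recurse described above, assuming numpy array configurations and a user-supplied `feasible` function:

```python
import numpy as np

def visible_recurse(a, b, feasible, eps):
    """Approximate visibility test: bisect the segment, checking midpoints first."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if not (feasible(a) and feasible(b)):
        return False

    def recurse(a, b):
        if np.linalg.norm(b - a) <= eps:
            return True
        m = (a + b) / 2.0
        if not feasible(m):
            return False  # found a colliding configuration early
        return recurse(a, m) and recurse(m, b)

    return recurse(a, b)
```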
It is more common to have _workspace distance_ information between pairs of objects. Let $CO_1,\ldots,CO_N$ be the C-obstacles, and let $c_i(q)$ indicate the clearance of the $i$'th obstacle in workspace at configuration $q$. We also need a function $\eta_i(q,q^\prime)$ that gives an upper bound on the distance that *any point on the robot moves* in workspace during the motion from $q$ to $q^\prime$.

For example, consider a 2R robot arm with link lengths $L_1$ and $L_2$, where the link geometries are simple line segments, and there are two C-obstacles, one for each link. The collision constraint for link 1 only depends on $q_1$, and the points on the link are contained within a circle with radius $L_1$. Moreover, in a movement of $\theta$ radians, the tip of the link moves at most $L_1 \theta$ distance in workspace. Hence,
$$\eta_1(q,q^\prime) = L_1 |q_1 - q^\prime_1 |$$
is a suitable upper bound. Through similar reasoning, we can show that
$$\eta_2(q,q^\prime) = (L_1+L_2) (|q_1 - q^\prime_1 | + |q_2 - q^\prime_2 |)$$
is an upper bound on how far the tip of link 2 moves. There are general formulas like this for arbitrary articulated robots. Specifically,
$$\eta_k(q,q^\prime) = (R_k + \sum_{i=1}^{k-1} L_i) \| (q_1,\ldots,q_k) - (q^\prime_1,\ldots,q_k^\prime)\|_1$$
is a suitable function for all nR robots, where $R_k$ bounds the distance from joint $k$ to any point on link $k$. The following algorithm uses this bound to perform exact collision detection.

*********************

**Algorithm Visible-Exact2**($a,b$)

1. If this is the first recursive call, check $a$ and $b$ for collision. Return false if either $\neg Feasible(a)$ or $\neg Feasible(b)$.
2. If $\eta_i(a,b) \leq c_i(a) + c_i(b)$ for all C-obstacles $i=1,\ldots,N$, return true.
3. $m \gets (a+b)/2$
4. If $\neg Feasible(m)$, return false.
5. Return Visible-Exact2$(a,m)$ $\wedge$ Visible-Exact2$(m,b)$.

********************

A final performance enhancement is that if the condition in Line 2 is satisfied for a given C-obstacle, that obstacle can simply be ignored in all subsequent recursive calls. This focuses effort only on the constraints that need further refinement.

A non-recursive variant of this algorithm, known as conservative advancement, gives the earliest point of contact during a motion from $a$ to $b$ by walking along the line segment as far as possible while ensuring that the condition in Line 2 holds. This is useful for collision detection in simulation.

Nearest-neighbor queries
------------------------

A significant computational expense in PRM, RRT, and their variants is computing nearest neighbors (and near-neighbors). There are three types of nearest-neighbor queries:

- (1-)Nearest-neighbor (NN$(q,P)$), used in RRT.
- $k$-nearest-neighbors (kNN$(q,P,k)$), used in PRM.
- $R$-neighborhood (near$(q,P,R)$), used in PRM.

These are important subroutines for a variety of application areas, including machine learning and geographic information systems, and hence nearest-neighbor algorithms and software packages are quite abundant. However, in motion planning, there are two problems that oftentimes break the assumptions used in such packages:

1. The point set $P$ is dynamic, and so extensive precomputation of data structures is not acceptable. Whatever data structures are used must support fast point insertion.
2. The distance metric is often non-Euclidean, and can even be non-Cartesian in the case of common geodesic spaces used in robotics, like SO(2) and SO(3).

*Brute-force* nearest neighbors simply loops through each point, and returns the one with smallest distance to the query point. This runs in $O(n)$ time, and similar algorithms can be used for kNN() and near() queries. It is also highly general, and can work with arbitrary metrics and spaces. However, using brute-force nearest neighbors leads PRM / RRT planning to be quadratic in the number of milestones. As a result, faster algorithms are usually needed.

### k-d trees

A *$k$-d tree* data structure is a spatial hierarchy that recursively divides a Cartesian space $\mathbb{R}^k$ into regions by splitting each region by a hyperplane. *The $k$ here refers to the number of dimensions in the space, not the $k$ in the kNN query. In the following, let us revert back to our original notation where dimensionality is denoted $d$.* Each hyperplane is aligned to one of the $d$ primary axes. An illustration of a k-d tree is shown below.

***************

| (a) k-d tree holding 14 2-D points | (b) First leaf reached in NN query | (c) Second leaf reached in NN query |
|-----------------------------------|------------------------------------|------------------------------------|
| ![fig:KDTrees_a](figures/planning/kdtree.svg) | ![fig:KDTrees_b](figures/planning/kdtree_query1.svg) | ![fig:KDTrees_c](figures/planning/kdtree_query2.svg) |

Figure 11. (a) $k$-d trees recursively divide a space into rectilinear regions. Each leaf of the tree contains a set of points contained in that region. (b) For a nearest-neighbor query (blue point), the leaf containing the point is reached first, and the closest point in the leaf is found (blue circle).
This forms an upper bound on the nearest neighbor's distance, and any leaves further than this distance will be pruned. (c) Only one more leaf is visited before the nearest neighbor is found.

***************

More formally, the binary tree $T_{kd}$ is composed of nodes $N$. Each leaf node contains a list of contained points `pts`, and each non-leaf node contains a split dimension `dim` and split value `value`. Non-leaf nodes have exactly two children $C^-$ and $C^+$, and all points $q \in \mathbb{R}^d$ such that $q_{dim} < value$ belong to the _negative_ child $C^-$, while points such that $q_{dim} \geq value$ belong to the _positive_ child $C^+$.

We will describe how to query for the closest point (NN), with the kNN and near queries implemented similarly. We traverse the tree in a branch-and-bound manner similar to the [bounding volume hierarchy](Geometry.ipynb#Bounding-volume-hierarchies) approach. Let's assume the Euclidean distance is used. We maintain a closest point $p_{close}$, initialized by picking a point from $P$ at random. We proceed examining nodes $N$ recursively starting from $N=root(T_{kd})$. Pseudocode is below:

*********************

**Algorithm KDTree-NN-recurse**($q,N,p_{close}$)

1. **if** $N$ is a leaf node **then**
2. Let `pts` be the points contained in $N$.
3. Let $p$ be the closest point in `pts` to $q$.
4. **return** the closer of $p$ and $p_{close}$.
5. **else** (non-leaf node)
6. Let its splitting plane be on axis `dim` with value `value`. Let its children be $C^-$ and $C^+$.
7. **if** $q_{dim} < value$ **then** (negative side first)
8. $C_1 \gets C^-$, $C_2 \gets C^+$.
9. **else** (positive side first)
10. $C_1 \gets C^+$, $C_2 \gets C^-$.
11. $p_{close} \gets$ KDTree-NN-recurse($q,C_1,p_{close}$)
12. **if** $|q_{dim} - value| \leq d(q,p_{close})$ **then** (prune opposite side if too far)
13. $p_{close} \gets$ KDTree-NN-recurse($q,C_2,p_{close}$)
14. **return** $p_{close}$

********************

If $N$ is a leaf node, we check all its points in `pts` in brute-force manner (Lines 1 – 4). If $N$ is a non-leaf node, containing split values `dim` and `value`, we first examine whether $q_{dim} < value$ or $q_{dim} \geq value$. We first recurse on the corresponding child (Lines 7 – 11). This recursive call may update $p_{close}$. Then, Lines 12 – 13 consider whether to check the opposite child. If $|q_{dim} - value| > d(q,p_{close})$, the distance $|q_{dim} - value|$ to the splitting hyperplane is sufficiently large that there is no chance that the closest point lies within the region defined by the opposite child. Hence, recursion on the opposite child can be skipped. Regardless of the outcome, we return $p_{close}$.

**Insertion.** To insert points into a $k$-d tree, we can simply locate the leaf node in which the point is located, and add it to the `pts` structure. If the number of points in `pts` exceeds some threshold, defined by a given parameter, then the node is converted into a non-leaf node via splitting. Letting `dim` be the axis of the parent, the chosen axis can either be `(dim+1) mod d` (round-robin splitting) or the dimension with the largest variation in `pts`. In either case, `value` is set to the median value of `pts` in that dimension. A potential problem with incremental insertion is that unless the points are distributed identically at random, the split value of a leaf may not bisect the distribution of future points, leading to an imbalanced tree.
More advanced algorithms may detect imbalanced trees during construction and rebalance them.

**Extensions.** $k$-d trees can be extended easily to weighted Euclidean distances, since a weighted Euclidean distance is equivalent to a rescaled version of Euclidean space. They can be extended to handle other distance metrics or non-Cartesian spaces with a bit more effort. For other distance metrics, the main challenge is determining the point-hyperplane distance $\min_{p | p[dim]=value} d(p,q)$ rather than the straightforward calculation in Line 12. Non-Cartesian spaces require an alternative definition of the splitting plane, sidedness determination (Line 7), and a point-splitting plane distance (Line 12). Insertion also needs to be modified to determine a reasonable splitting plane.

**Performance.** Notice that in the NN query, recursion proceeds in depth-first fashion, and the first leaf node found is associated with the region containing $q$. Ideally, the closest point in this region will be very close to $q$, and hence most of the opposite sides will be pruned. In the best case, all of the opposite sides will be pruned and the query runs in $O(\log |P|)$ time. In the worst case, all $P$ points must be checked, in addition to the overhead of traversing the tree, making this no better than brute-force search. It can be seen that performance degrades if the points are nonuniformly distributed or the tree is imbalanced, that is, the number of points on either side of a split differs significantly. Performance also degrades in spaces of higher dimension, because point-point distances tend to be much larger than point-hyperplane distances.

Approximate methods

Due to the degradation in performance of $k$-d trees in spaces of higher dimension, it is common to apply approximate nearest-neighbor techniques. These sacrifice exactness of the output for speed improvements. The sacrifice of exactness is usually worth it in sampling-based planning because for most algorithms *there is no inherent reason to use the exact nearest neighbor(s) for connection* except that a closer milestone is slightly more likely to yield a feasible edge than a farther one.

One straightforward approximate method that uses $k$-d trees is to modify the pruning condition $|q_{dim} - value| \leq d(q,p_{close})$ in Line 12 so that more nodes are pruned. A typical approach is to inflate the point-hyperplane distance by a relative coefficient $\epsilon_{rel}\geq 0$ and absolute coefficient $\epsilon_{abs}\geq 0$ so that the condition in Line 12 becomes $(1+\epsilon_{rel})\cdot|q_{dim} - value| + \epsilon_{abs} \leq d(q,p_{close})$. With such an approach, it is easy to show that the distance of the resulting point $p_{close}$ to $q$ is no more than $(1+\epsilon_{rel})d^\star + \epsilon_{abs}$, where $d^\star$ is the distance to the true nearest neighbor. With larger values of $\epsilon_{rel}$ and $\epsilon_{abs}$, more branches are pruned at a sacrifice of optimality.

Another approximation technique is Locality-Sensitive Hashing (LSH), which is based on the idea that if two points are close, then random projections of the points onto a lower-dimensional subspace are also likely to be close. The details of LSH are beyond the scope of this book, but [many references are available](https://en.wikipedia.org/wiki/Locality-sensitive_hashing).

Several software packages are available for exact and approximate nearest-neighbor queries.
In Python, [scipy contains an implementation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query.html) of $k$-d trees, and [scikit-learn](https://scikit-learn.org/stable/modules/neighbors.html) implements $k$-d trees and ball trees. Both libraries accept an approximation factor. For approximate nearest neighbors, there are many packages named ANN, and the [FLANN](https://www.cs.ubc.ca/research/flann/) library is a popular choice, used in the [Open Motion Planning Library](https://ompl.kavrakilab.org/).

Common pitfalls in employing PRMs

Sampling-based motion planning is appealing since it can be implemented for a wide variety of problems by non-planning experts. However, there are several issues that can cause PRMs to fail. What is often frustrating is that the PRM will not provide a *rationale* for failure! It simply appears that it "doesn't work". Some of the most common pitfalls encountered when implementing PRMs and their variants are:

- Improper handling of non-Euclidean topology of $\mathcal{C}$ in the distance metric $d(p,q)$ and dynamic collision checking function.
- Improper scaling of the C-space / badly scaled distance thresholds $R$ or $\delta$.
- Providing infeasible start and goal configurations.
- Providing start and goal configurations in "deep pockets": passages narrowing down as the endpoint is approached.
- Incorrect feasibility tests.
- Applying a planner when the free space volume is negligible, or narrow passages are infinitely thin.

When debugging, it is often extremely useful to extract and visually debug the roadmap produced by a planner. This helps diagnose problems like the planner taking tiny steps, not expanding the roadmap at all, or detecting phantom obstacles. This can be tricky in high-dimensional spaces, since visualization must ultimately take place on a 2D display, and a roadmap may contain thousands of configurations and edges.

To handle topology, it is extremely important to ensure that the notion of a "straight line path" in dynamic collision checking interpolates nearly along a geodesic, and that the distance metric is relatively close to a geodesic distance. When orientations are present, if this issue were neglected and the C-space were treated as Euclidean, then small positive rotations would *never* be connected to small negative rotations. This will manifest itself as artifacts in which the robot will either fail to find a path, or will rotate in an unnecessarily long fashion.

For choosing thresholds, a rule of thumb is to start by setting $R$ and $\delta$ to be approximately 10% of the diameter of the C-space. The values can then be fine-tuned to achieve better performance on a given problem. A good rule of thumb is to aim for approximately 5 – 15 connections per milestone. This also tends to work well for setting the value of $k$ when $k$-nearest neighbors is used as the nearness criterion in PRM.

The infeasible endpoint problem is often encountered when there is a bit of error in the world model or the robot's sensing of its configuration, and the robot starts or ends at a configuration that is in contact (or close to it). There are two approaches to handling this: before planning, adjust the world model so that the robot is collision-free (which can be hard), or slightly perturb $s$ and $g$ to new configurations $s^\prime$ and $g^\prime$ that are collision-free with respect to the robot's current knowledge. Then, the path is planned between $s^\prime$ and $g^\prime$, and the robot executes the path $s\rightarrow s^\prime \rightsquigarrow g^\prime \rightarrow g$.
This, however, assumes that the path to the perturbed configurations is actually feasible in the real world, which requires a bit of care.

The deep pocket problem is faced particularly often in manipulation or docking, in which the start or goal has the robot touching the obstacle, and the robot must make a careful, coordinated maneuver to leave it. For example, when the robot is grasping an object, its fingers are touching both sides of the object, and the hand must slide carefully in or out of position without moving laterally or rotating about certain axes. Hence, the passage is quite narrow in at least 2 or 3 dimensions! In these pockets of free space, the robot must take shorter steps, and most directions of travel lead to collision. However, once the pocket is escaped (like when the hand is away from a grasped object), then large steps can again be taken. In other words, visibility is nonuniform across $\mathcal{F}$. There are three general ways of handling this issue, all of which require studying the manipulation problem more carefully:

1. Manually define a short docking/undocking maneuver that inserts into / retracts from the pocket. This could be, for example in manipulation, a Cartesian move that places the gripper in front of the object with fingers wide open. The inverse of this maneuver is used to determine the start and goal points for the planner.
2. Start a tree-growing planner like RRT from the constrained endpoint with small step size. After some time, the farthest node from the start is assumed to have wiggled out of the pocket, and point-to-point planning can begin from that new endpoint.
3. Develop an obstacle-sliding local planner or extension method that allows the planner to generate motions that slide against obstacles.

It is easy to make bugs when defining feasibility tests, particularly in more complex problems where feasibility requires passing many conditions. This is problematic because the subroutine is the *only* representation the planner has of the free space, so it needs to accurately reproduce the C-obstacles of the problem; otherwise the planner will produce paths that collide, or fail to find a solution where one obviously exists. There are some newer techniques that search for a small set of problematic C-obstacles blocking the way, which can help debug incorrect settings (Hauser 2012). But perhaps the first approach to try is to capture statistics during planning to detect the frequency at which each condition passes and fails inside the test. Some motion planning libraries will do this automatically and ask the user to define individual conditions, but in others this is up to the user. If a test never fails (or always passes) this suggests an obvious implementation bug.

Finally, the free space must have non-negligible volume (that is, $\mu(\mathcal{F}) / \mu(\mathcal{C})> 0$). This condition is violated when a constraint is introduced (like an IK constraint, a constraint that two objects must touch, or a requirement that a joint take on a particular value) that leaves all feasible configurations on a manifold of lower dimensionality than the ambient space. In such cases, the PRM will not be able to generate feasible samples with non-negligible probability. One approach to handle this problem is to parameterize the solution manifold explicitly.
Extensions of PRMs are also available to properly handle manifold constraints without a need for parameterization; these techniques generate samples by projecting them onto the feasible manifold, and also construct paths that move along the manifold. These will be discussed later... (TODO: where will manipulation planning be added?)

Incomplete Methods
------------------

In addition to the above methods that satisfy some notion of completeness, there are additional methods based on optimization techniques that are incomplete: they have no guarantee of finding a feasible path when one exists. They can, however, generally produce paths quickly when they do work.

Potential fields

Potential fields are a well-studied technique that works using only local information to guide the robot's movement, and is therefore quite fast, making it appropriate for real-time obstacle avoidance as well as path planning in relatively simple spaces.

The general idea is to consider the robot's configuration as a particle in some energy potential landscape. Due to "gravity" the particle will feel virtual forces equal to the negative of the gradient of this landscape. If the landscape is constructed to have a global minimum at the goal, then by following the gradient the particle will, hopefully, arrive at the goal.

To construct such a landscape, the usual method is to combine an *attractive potential* field, whose gradient yields a force with gain constant $k_{att}$ along the vector pointing toward the goal: $$P_{att}(q) = \frac{1}{2}k_{att} \| q - q_g \|^2$$ along with a *repulsive potential* generating a repulsive force for each obstacle. The repulsive force is chosen to grow larger (typically toward infinity) as the robot gets closer to the obstacle. Some limiting distance $\rho_0$ is typically chosen beyond which the effect of an obstacle drops off to 0. One such function is the following: $$P_{rep}(q) = \left\lbrace \begin{array}{ll} \frac{1}{2}k_{rep}(1/\rho(q) - 1/\rho_0)^2 & \text{if }\rho(q) \leq \rho_0\\ 0 & \text{if }\rho(q) > \rho_0 \end{array}\right.$$ Here $\rho(q)$ is a function that measures the workspace distance between the robot and the obstacle, and $k_{rep}$ modulates the strength of the force. The potential is infinity at $\rho(q)=0$ and drops down to 0 at $\rho(q) = \rho_0$. (Note that here we must be able to calculate distance rather than just Boolean collision detection.)

The force acting on the particle is the negated gradient of each potential: $$f_{att}(q) = -k_{att} (q - q_g)$$ and $$f_{rep}(q) = \left\lbrace \begin{array}{ll} k_{rep} (\frac{1}{\rho(q)} - \frac{1}{\rho_0})\frac{1}{\rho(q)^2} \frac{\partial \rho(q)}{\partial q} & \text{if }\rho(q) \leq \rho_0\\ 0 & \text{if }\rho(q) > \rho_0 \end{array}\right.$$

Then, to evolve the configuration over time as a particle in this potential field, we use an iterative approach. At the current time step, the robot is at position $q_t$. The next point along the path is given by: $$q_{t+1} = q_t + \frac{\Delta t}{m}(f_{att}(q_t) + f_{rep}(q_t))$$ where $m$ is a virtual "mass" of the robot and $\Delta t$ is the time step. One issue with this method is that the magnitude of the force vector can vary greatly, from 0 at the goal to infinity at the boundary of an obstacle. To avoid huge jumps (or barely any movement at all) along the path, it makes sense to dynamically set the mass proportional to the magnitude of the force.
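To make the iteration concrete, here is a small Python sketch of this update for a 2-D point robot. The circular-obstacle model (so that $\rho(q)$ has a closed form) and the gain values are illustrative assumptions.
###Code
# One potential-field gradient step for a 2-D point robot.
# Circular obstacles are an illustrative assumption so that rho(q) is simple.
import numpy as np

k_att, k_rep, rho0, dt = 1.0, 0.5, 1.0, 0.05

def f_att(q, q_goal):
    return -k_att * (q - q_goal)

def f_rep(q, centers, radii):
    f = np.zeros(2)
    for c, r in zip(centers, radii):
        rho = np.linalg.norm(q - c) - r                  # workspace distance rho(q)
        if 0 < rho <= rho0:
            grad_rho = (q - c) / np.linalg.norm(q - c)   # d rho / d q
            f += k_rep * (1.0/rho - 1.0/rho0) / rho**2 * grad_rho
    return f

def step(q, q_goal, centers, radii):
    f = f_att(q, q_goal) + f_rep(q, centers, radii)
    m = np.linalg.norm(f)   # mass proportional to |f| => constant step length dt
    return q + (dt / m) * f if m > 1e-9 else q

q, q_goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
centers, radii = [np.array([2.5, 0.3])], [0.5]
for _ in range(200):
    q = step(q, q_goal, centers, radii)
print(q)   # should end up near the goal, having skirted the obstacle
###Output
_____no_output_____
###Markdown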
In this way, a consistent rate of progress is ensured as the path evolves.

TODO: Figure 13

This method works well when the robot simply needs to move slightly away from a straight-line path to avoid obstacles, provided that obstacles are relatively convex and spatially distant. Its main advantages are 1) speed of computation, and 2) only local information about obstacles is needed. However, like other local methods it is prone to local minima, caused either by concave obstacles or by narrow passages where the repulsive forces from either side of the passage cancel out the attractive force.

Trajectory optimization

Trajectory optimization is another potential-based method that optimizes the overall shape of the robot's path to minimize cost. Unlike potential fields, for which the optimization variable is the configuration at a single point in time, trajectory optimization uses some parameterization of the *entire* path as the optimization variable. This helps it avoid future obstacles and, in some cases, avoid local minima that potential fields would fall prey to.

Such methods begin with the definition of some fixed number of *path parameters* $\theta \in \mathbb{R}^N$ which dictate the shape of a candidate path. One example, for piecewise-linear paths passing between the start and the goal configuration, is simply the set of intermediate milestones: $$\theta = (q_1,\ldots,q_{k-1})$$ In this case, the path $y(s)$ consists of $k$ straight-line segments, interpolating between milestones $q_0=q_s, q_1, \ldots, q_{k-1}, q_k=q_g$. Any value of $\theta$ dictates the shape of some path, and any piecewise-linear path with $k$ segments corresponds to some value of $\theta$. To make this dependence clear, we shall refer to the path defined by some value $\theta$ as $y_\theta$.

TODO: Figure 14

If the dimension of the C-space is $d$, then $N = d(k-1)$. Hence the trajectory optimization problem can be quite high-dimensional (hundreds or thousands of dimensions) even for C-spaces with a moderate number of dimensions.

Next, we must encode the objective function and constraints. For minimizing path length, it may be tempting to define the cost function as the path length itself: $$f(\theta) = \sum_{i=1}^k \| q_i - q_{i-1} \|.$$ However, this formulation has the drawback that it is not differentiable when two milestones are equal, and also has a null direction when three milestones lie on a straight line. It is more numerically convenient to minimize the sum of squared distances $$f(\theta) = \sum_{i=1}^k \| q_i - q_{i-1} \|^2,$$ which, if a $k$-segment piecewise-linear path is indeed optimal, is minimized when the path (nearly) has minimum length and the milestones are evenly spaced.

Now let us proceed to defining constraints, which we assume are in the form $g(q) \leq 0$. At first glance, one might choose to simply enforce constraints at the milestones: $$h(\theta) = \begin{bmatrix}{g(q_1)} \\ {\vdots} \\ {g(q_{k-1})} \end{bmatrix} \leq 0.$$ However, this runs the risk of having two milestones on either side of an obstacle, with the intermediate segment crossing the obstacle. Instead, we must consider the possibility of constraint violations between milestones. A straightforward way to do so is to use *collocation points*, which are points along the path at which constraints will be enforced.

Specifically, we can define some number $M$ of collocation points at parameters $s_1,\ldots,s_M \in [0,1]$, usually evenly distributed along the parameter space $[0,1]$.
The $j$'th collocation point lies on a segment indexed by $i_j \in \{1,\ldots,k\}$ and lies a fraction $u_j \in [0,1]$ along the straight-line segment, where these are determined so that the configuration at the collocation point is: $$y_\theta(s_j) = q_{i_j-1} + u_j (q_{i_j} - q_{i_j-1}).$$ We then define many inequality constraints on $\theta$ so that constraints at each collocation point are enforced: $$h(\theta) = \begin{bmatrix}{g(y_\theta(s_1))} \\ {\vdots} \\ {g(y_\theta(s_M))} \end{bmatrix} \leq 0.$$

The resulting problem is a constrained optimization problem ([Appendix B.3.](Optimization.ipynb#Constrained-Optimization)), which can be solved using a nonlinear program solver, like Sequential Quadratic Programming (SQP). Efficient implementations will take advantage of sparseness in the constraint Jacobian.

Another alternative lets us use unconstrained optimization ([Appendix B.3.](Optimization.ipynb#Unconstrained-Optimization)) by converting hard constraints to penalties in the objective function. In this approach we define a penalty function for violating constraints: $$f_{pen}(\theta) = \sum_{j=1}^M \max(g(y_\theta(s_j)), 0).$$ Then, by minimizing a weighted objective function $$f(\theta) + w f_{pen}(\theta)$$ using standard nonlinear optimization techniques (e.g., quasi-Newton methods), portions of the path for which constraints are violated will be pushed out of the C-obstacle. However, if $w$ is not set sufficiently high, then the optimizer of the weighted objective function will still slightly overlap the obstacle. To address this, we can progressively increase $w$ to reduce the amount of overlap. To prevent overlap altogether, we can also allow the constraint violation penalty to extend a distance $\gamma > 0$ outside the region where the constraint is violated: $$f_{pen}(\theta; \gamma) = \sum_{j=1}^M \left( \max(g(y_\theta(s_j)), -\gamma) + \gamma \right).$$

Regardless of whether a constrained or unconstrained approach is taken, there are two major issues with trajectory optimization:

- The computational cost of optimization depends strongly on the number of path parameters and collocation points. If too few path parameters are chosen then a feasible path may not be found; if too few collocation points are chosen then the path may violate constraints.
- For complex environments, the potential landscape in $\theta$ space is littered with local minima (and typically, more minima appear as the granularity $k$ of the path grows).

The problem of choosing collocation points can be addressed by adaptively identifying the point along the path with maximum constraint violation, using advanced optimization techniques known as *constraint generation* or *semi-infinite programming*.

The local minimum problem can be partially addressed either by initializing the optimizer with a path from some other motion planning method, like a sampling-based planner, or by using global optimization techniques. The approach of seeding an optimizer with a sampling-based planner is fast and often works well. However, it does not guarantee a globally optimal path, because the planner may have produced a seed path in a suboptimal homotopy class or basin of attraction. Global optimization may result in better paths, but can be extraordinarily slow, particularly in high-dimensional spaces.

Summary
-------

Key takeaways:

- Sampling-based motion planners can overcome some limitations of the curse of dimensionality.
However, they pay a cost in the variance of solution quality and running time.
- The running time of such planners depends on the visibility characteristics of the free space, which do not directly relate to dimensionality. Running times will be fast in spaces of good visibility.
- Probabilistic roadmaps (PRMs) and Rapidly-Exploring Random Trees (RRTs) are the most widely used classes of such planners. There are many variations on the basic structure.
- Shortcutting can be employed in postprocessing to achieve fast (but local) improvements in path quality. To achieve global improvements, optimizing variants of PRMs and RRTs are available.
- Potential field methods use only local information to determine a direction of movement and are extremely fast. They can work well for real-time obstacle avoidance, but are prone to local minima.
- Trajectory optimization methods simultaneously optimize milestones along an entire trajectory. However, they require a choice of the number of milestones used to represent a path, and are also prone to local minima.

Exercises
---------

1. Let $n$ be the number of (feasible) milestones in a probabilistic roadmap, and $N$ be the number of configurations sampled. Prove that if a PRM algorithm is probabilistically complete as $n$ increases, then it is also probabilistically complete as $N$ increases, as long as the chance of drawing a feasible sample is nonzero.
1. A PRM with a fixed connection radius $R$ can be thought of as restricting the visibility set of a point to its intersection with a neighborhood of radius $R$. With this interpretation, are the visibility properties of a space ($\epsilon$-goodness and $\beta$-expansiveness) dependent on $R$? Explain why or why not. How would the visibility properties vary depending on whether the distance function was chosen to use an $L_1$, $L_2$, or $L_\infty$ metric?
1. Suppose the free space is described by a set of $m$ C-obstacles $C_1,...,C_m$. Let $\mathcal{C}$ be the space in which configurations are sampled, and let $\mu$ be the volume measure. For a sampled configuration $q$, what is the probability that $q$ lies within each C-obstacle? If testing each obstacle has the same computational cost, what is the fastest order in which the C-obstacles should be tested?
1. Illustrate a free space in which Lazy PRM is expected to check a large fraction of edges for visibility before finding a solution. Lazy PRM may take more time than a standard PRM in this case. What component of Lazy PRM would be the cause of this computational overhead?
1. Does it make sense to build a lazy PRM in precomputation for multi-query path planning? If so, give some examples of situations in which this approach would be useful. If not, explain why not.
1. In our discussion of shortcutting, path length was used as the objective function for optimization. Give an example of an objective function for which shortcutting does not improve the path cost. Then, describe a modification to the shortcutting algorithm so that the objective function does not increase.
1. What is the maximum number of static collision checks needed for a PRM to check a path between milestones $v_1,...,v_m$, given a fixed resolution of $\epsilon$ for dynamic collision checking? How many static collision checks and distance queries are needed for a PRM to solve a problem, using Visible-Exact1 for dynamic collision checking, where the clearance of the path $y = v_1 \rightarrow \cdots \rightarrow v_m$ is $\delta$?
1. Implement a brute-force $k$-nearest-neighbor algorithm that runs in $O(n k)$ time. Hint: store the $k$ nearest neighbors in an array, and maintain the index of the neighbor with maximum distance. Can you improve this to $O(n \log k)$ time?
1. Write pseudocode for an $R$-neighborhood query for a $k$-d tree. Implement it, double-checking that it works properly compared to a brute-force approach on random datasets.

Interactive quiz
###Code
#This code must be run from the RoboticSystemsBook folder
# If you are running on Google Colab, uncomment the following code:
#
# %cd ~
# !git clone --depth 1 https://github.com/krishauser/RoboticSystemsBook
# %cd RoboticSystemsBook
from rsbook_code.assessment import quiz
quiz.show("motion_planning_higher_dimensions")
###Output
_____no_output_____ |
Python-Data-Science-and-Machine-Learning-Bootcamp/Python-for-Data-Visualization/Geographical Plotting/Choropleth Maps Exercise - Solutions.ipynb | ###Markdown
Choropleth Maps Exercise - Solutions

Welcome to the Choropleth Maps Exercise! In this exercise we will give you some simple datasets and ask you to create Choropleth Maps from them. Due to the nature of Plotly we can't show you examples embedded inside the notebook.

[Full Documentation Reference](https://plot.ly/python/reference/#choropleth)

Plotly Imports
###Code
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode,iplot,plot
init_notebook_mode(connected=True)
###Output
_____no_output_____
###Markdown
** Import pandas and read the csv file: 2014_World_Power_Consumption**
###Code
import pandas as pd
df = pd.read_csv('2014_World_Power_Consumption')
###Output
_____no_output_____
###Markdown
** Check the head of the DataFrame. **
###Code
df.head()
###Output
_____no_output_____
###Markdown
** Referencing the lecture notes, create a Choropleth Plot of the Power Consumption for Countries using the data and layout dictionary. **
###Code
data = dict(
type = 'choropleth',
colorscale = 'Viridis',
reversescale = True,
locations = df['Country'],
locationmode = "country names",
z = df['Power Consumption KWH'],
text = df['Country'],
colorbar = {'title' : 'Power Consumption KWH'},
)
layout = dict(title = '2014 Power Consumption KWH',
geo = dict(showframe = False,projection = {'type':'Mercator'})
)
choromap = go.Figure(data = [data],layout = layout)
plot(choromap,validate=False)
###Output
_____no_output_____
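###Markdown
Note: `plot` writes the figure to a standalone HTML file and opens it in the browser. To render the figure inline in the notebook instead, you can use `iplot` (already imported above):
###Code
iplot(choromap,validate=False)
###Output
_____no_output_____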
###Markdown
USA Choropleth** Import the 2012_Election_Data csv file using pandas. **
###Code
usdf = pd.read_csv('2012_Election_Data')
###Output
_____no_output_____
###Markdown
** Check the head of the DataFrame. **
###Code
usdf.head()
###Output
_____no_output_____
###Markdown
** Now create a plot that displays the Voting-Age Population (VAP) per state. If you later want to play around with other columns, make sure you consider their data type. VAP has already been transformed to a float for you. **
###Code
data = dict(type='choropleth',
colorscale = 'Viridis',
reversescale = True,
locations = usdf['State Abv'],
z = usdf['Voting-Age Population (VAP)'],
locationmode = 'USA-states',
text = usdf['State'],
marker = dict(line = dict(color = 'rgb(255,255,255)',width = 1)),
colorbar = {'title':"Voting-Age Population (VAP)"}
)
layout = dict(title = '2012 General Election Voting Data',
geo = dict(scope='usa',
showlakes = True,
lakecolor = 'rgb(85,173,240)')
)
choromap = go.Figure(data = [data],layout = layout)
plot(choromap,validate=False)
###Output
_____no_output_____ |
bitcoin_script_load_data_postgres_manyuser.ipynb | ###Markdown
Import all libraries
###Code
import requests
import json
import time
import multiprocessing
import psycopg2
###Output
_____no_output_____
###Markdown
Functions for loading the data and counting rows in the meantime
###Code
# Postgres
def load_data_postgres(data):
start = time.time()
for line in data:
response_insert = requests.post('http://localhost:3001/postgres/write/crypto',json=line).text
end = time.time()
print("postgres",end-start)
# Firebase
def load_data_firebase(data):
start = time.time()
prev_time = start
for line in data:
response_insert = requests.post('http://localhost:3001/firebase/write/crypto',json=line).text
end = time.time()
print("firebase",end-start)
def time_measurement(num,l):
start = time.time()
prev_time = start
query_list_stat_all=[]
for i in range(num):
time.sleep(20)
# print or write to log here
prev_time = time.time()
response_write = int(requests.get('http://localhost:3001/postgres/read/crypto').text)
#running queries after 20 sec
quer_stat = query_postgres(l)
quer_stat.append(response_write)
time_now = time.time()-start
quer_stat.append(time_now)
print("No of rows {} and time {}".format(response_write,time_now))
print("query_stats",quer_stat)
print("\n\n")
query_list_stat_all.append(quer_stat)
return query_list_stat_all
def query_postgres(query_list):
    time_list = []
    q_all_start = time.time()
    for i, que in enumerate(query_list):
        temp = {}
        temp['query'] = que
        start = time.time()
        response_query = requests.post('http://localhost:3001/postgres/query/crypto', json=temp)
        temp["id"] = i + 1
        end = time.time()
        temp["query_time"] = end - start
        time_list.append(temp)
    q_all_end = time.time()
    final_list = [time_list, (q_all_end - q_all_start)]
    print("time taken to run all the queries", q_all_end - q_all_start)
    return final_list
def query_firebase(query_list):
q_all_start = time.time()
for i,que in enumerate(query_list):
temp = {}
temp['query'] = que
start = time.time()
print(temp)
# post request
response_query = requests.post('http://localhost:3001/firebase/query/crypto',json=temp)
print(response_query)
end = time.time()
print("time taken for query {} is {}".format(i+1,(end-start)))
q_all_end = time.time()
print("time taken to run all the queries", q_all_end-q_all_start)
l = ["""select count(*) from crypto_tab;""",
""" select * from crypto_tab order by bitcoin_info->>'Date' DESC limit 100;""",
"""select * from crypto_tab where bitcoin_info->>'Date' = '2020-12-31 21:52:00';""",
"""select * from crypto_tab where (bitcoin_info->>'High')::float > 3800 and (bitcoin_info->>'High')::float < 4500; """,
"""select * from crypto_tab where (bitcoin_info->>'Volume')::float > 2;"""
]
# query_postgres(l)
###Output
_____no_output_____
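###Markdown
Aside: each `requests.post` call above opens a fresh HTTP connection. A sketch of the same loader using `requests.Session`, which reuses connections and typically speeds up row-by-row posting (same endpoint as above; the speedup is an assumption to verify, not a measurement):
###Code
# Variant of load_data_postgres that reuses one HTTP connection via a Session.
def load_data_postgres_session(data):
    start = time.time()
    with requests.Session() as s:      # connection is kept alive across posts
        for line in data:
            s.post('http://localhost:3001/postgres/write/crypto', json=line)
    print("postgres (session)", time.time() - start)
###Output
_____no_output_____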
###Markdown
Select which data to load
###Code
# BITCOIN DATA
json_file_path= './data/bitcoin_sep_data.json'
with open(json_file_path) as json_file:
bitcoin_data = json.load(json_file)
bitcoin_data_1 = bitcoin_data[:17359]
bitcoin_data_2 = bitcoin_data[17359:]
# ETHEREUM DATA
json_file_path= 'data/et_sep_data.json'
with open(json_file_path) as json_file:
et_data = json.load(json_file)
et_data_1 = et_data[:17359]
et_data_2 = et_data[17359:]
# LITECOIN DATA
json_file_path= './data/lt_sep_data.json'
with open(json_file_path) as json_file:
lt_data = json.load(json_file)
lt_data_1 = lt_data[:17359]
lt_data_2 = lt_data[17359:]
p1 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data_1,))
p2 = multiprocessing.Process(target=load_data_postgres, args=(et_data_1, ))
p3 = multiprocessing.Process(target=load_data_postgres, args=(lt_data_1, ))
p4 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data_2,))
p5 = multiprocessing.Process(target=load_data_postgres, args=(et_data_2, ))
p6 = multiprocessing.Process(target=load_data_postgres, args=(lt_data_2, ))
# starting process 1
p1.start()
# starting process 2
p2.start()
# starting process 3
p3.start()
# starting process 4
p4.start()
# starting process 5
p5.start()
# starting process 6
p6.start()
# wait until process 1 is finished
p1.join()
# wait until process 2 is finished
p2.join()
# wait until process 3 is finished
p3.join()
# wait until process 4 is finished
p4.join()
# wait until process 5 is finished
p5.join()
# wait until process 6 is finished
p6.join()
###Output
postgres 125.61071848869324
postgres 125.63218021392822
postgres 125.6639609336853
postgres 125.72213530540466
postgres 125.87022113800049
postgres 126.0732593536377
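###Markdown
The six hand-managed processes above can also be expressed with `multiprocessing.Pool`, which handles the start/join bookkeeping. A sketch assuming the same helper and data chunks defined above:
###Code
# Same fan-out as above, using a process pool instead of manual Process objects.
from multiprocessing import Pool

chunks = [bitcoin_data_1, et_data_1, lt_data_1,
          bitcoin_data_2, et_data_2, lt_data_2]
with Pool(processes=6) as pool:
    pool.map(load_data_postgres, chunks)
###Output
_____no_output_____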
###Markdown
Run concurrent load processes with row-count measurement (PostgreSQL)
###Code
p1 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data, ))
p2 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data, ))
p3 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data, ))
p4 = multiprocessing.Process(target=time_measurement, args=(len(bitcoin_data), l))
# starting process 1
p1.start()
# starting process 2
p2.start()
# starting process 3
p3.start()
# starting process 4
p4.start()
# wait until process 1 is finished
p1.join()
# wait until process 2 is finished
p2.join()
# wait until process 3 is finished
p3.join()
# wait until process 4 is finished
p4.join()
###Output
_____no_output_____
###Markdown
Run process for PostgreSQL
###Code
# TRAJECTORY DATA
json_file_path = './data/trajectory_data_60k.json'
with open(json_file_path) as json_file:
    data = json.load(json_file)
# load_data_postgres(data)   # optional REST-based load; the direct psycopg2 load follows below
sql = """insert into trajectory_path(path_info) values(%s)"""
sql_count = """select count(*) from trajectory_path"""
start_time = time.time()
prev_time = start_time
prv_count = 0
with psycopg2.connect(host='localhost', port=5432, database='applicationdb', user='postgres', password='4258') as conn:
with conn.cursor() as cur:
#start_time = time.time()
print(start_time)
counter = 0
for line in data:
line = str(line)
line = line.replace('\'','\"')
            try:
                cur.execute(sql, (line,))
            except (Exception, psycopg2.DatabaseError) as error:
                print(error)
                print(time.time()-start_time)
                # roll back the aborted transaction so later inserts can proceed;
                # without this, every subsequent statement fails with
                # "current transaction is aborted"
                conn.rollback()
                continue
dt = time.time() - prev_time
if dt > 1:
# print or write to log here
prev_time = time.time()
cur.execute(sql_count)
result = cur.fetchone()
print(result[0])
conn.commit()
end_time = time.time()
print(end_time)
print("time taken to load the data is {} seconds".format(end_time-start_time))
###Output
1637452712.3010046
2058
4605
7130
9652
12216
14725
17256
19806
22350
24918
27493
30004
32538
35064
37589
40129
42657
45291
47736
50190
52728
missile
CONTEXT: PL/pgSQL function process_dist_calculation() line 4 at RAISE
21.097785234451294
current transaction is aborted, commands ignored until end of transaction block
21.098284006118774
... (repeated "current transaction is aborted" output truncated)
current transaction is aborted, commands ignored until end of transaction block
21.34963345527649
current transaction is aborted, commands ignored until end of transaction block
21.350529670715332
current transaction is aborted, commands ignored until end of transaction block
21.351000547409058
current transaction is aborted, commands ignored until end of transaction block
21.352004289627075
current transaction is aborted, commands ignored until end of transaction block
21.3529953956604
current transaction is aborted, commands ignored until end of transaction block
21.354121208190918
current transaction is aborted, commands ignored until end of transaction block
21.354997158050537
current transaction is aborted, commands ignored until end of transaction block
21.354997158050537
current transaction is aborted, commands ignored until end of transaction block
21.35618782043457
current transaction is aborted, commands ignored until end of transaction block
21.35740852355957
current transaction is aborted, commands ignored until end of transaction block
21.358158588409424
current transaction is aborted, commands ignored until end of transaction block
21.358997344970703
current transaction is aborted, commands ignored until end of transaction block
21.360005855560303
current transaction is aborted, commands ignored until end of transaction block
21.360995531082153
current transaction is aborted, commands ignored until end of transaction block
21.360995531082153
current transaction is aborted, commands ignored until end of transaction block
21.362035274505615
current transaction is aborted, commands ignored until end of transaction block
21.36331796646118
current transaction is aborted, commands ignored until end of transaction block
21.364182472229004
current transaction is aborted, commands ignored until end of transaction block
21.365036010742188
current transaction is aborted, commands ignored until end of transaction block
21.365995407104492
current transaction is aborted, commands ignored until end of transaction block
21.36700177192688
current transaction is aborted, commands ignored until end of transaction block
21.368165254592896
current transaction is aborted, commands ignored until end of transaction block
21.368990898132324
current transaction is aborted, commands ignored until end of transaction block
21.370034217834473
current transaction is aborted, commands ignored until end of transaction block
21.370993614196777
current transaction is aborted, commands ignored until end of transaction block
21.37204074859619
current transaction is aborted, commands ignored until end of transaction block
21.372995138168335
current transaction is aborted, commands ignored until end of transaction block
21.372995138168335
current transaction is aborted, commands ignored until end of transaction block
21.37404155731201
current transaction is aborted, commands ignored until end of transaction block
21.37503457069397
current transaction is aborted, commands ignored until end of transaction block
21.376006603240967
current transaction is aborted, commands ignored until end of transaction block
21.377044439315796
current transaction is aborted, commands ignored until end of transaction block
21.377997636795044
current transaction is aborted, commands ignored until end of transaction block
21.377997636795044
current transaction is aborted, commands ignored until end of transaction block
21.37907385826111
current transaction is aborted, commands ignored until end of transaction block
21.380104780197144
current transaction is aborted, commands ignored until end of transaction block
21.381016492843628
current transaction is aborted, commands ignored until end of transaction block
21.382123470306396
current transaction is aborted, commands ignored until end of transaction block
21.382997512817383
current transaction is aborted, commands ignored until end of transaction block
21.382997512817383
current transaction is aborted, commands ignored until end of transaction block
21.384033679962158
current transaction is aborted, commands ignored until end of transaction block
21.38500213623047
current transaction is aborted, commands ignored until end of transaction block
21.386033535003662
current transaction is aborted, commands ignored until end of transaction block
21.386033535003662
current transaction is aborted, commands ignored until end of transaction block
21.38804793357849
current transaction is aborted, commands ignored until end of transaction block
21.388988733291626
current transaction is aborted, commands ignored until end of transaction block
21.390997886657715
current transaction is aborted, commands ignored until end of transaction block
21.391873359680176
current transaction is aborted, commands ignored until end of transaction block
21.391993045806885
current transaction is aborted, commands ignored until end of transaction block
21.393128395080566
current transaction is aborted, commands ignored until end of transaction block
21.394999980926514
current transaction is aborted, commands ignored until end of transaction block
21.39599323272705
current transaction is aborted, commands ignored until end of transaction block
21.396991968154907
current transaction is aborted, commands ignored until end of transaction block
21.39803433418274
current transaction is aborted, commands ignored until end of transaction block
21.39899444580078
current transaction is aborted, commands ignored until end of transaction block
21.399994611740112
current transaction is aborted, commands ignored until end of transaction block
21.399994611740112
current transaction is aborted, commands ignored until end of transaction block
21.401058673858643
current transaction is aborted, commands ignored until end of transaction block
21.402035236358643
current transaction is aborted, commands ignored until end of transaction block
21.402992486953735
current transaction is aborted, commands ignored until end of transaction block
21.40500783920288
current transaction is aborted, commands ignored until end of transaction block
21.406086444854736
current transaction is aborted, commands ignored until end of transaction block
21.40699815750122
current transaction is aborted, commands ignored until end of transaction block
21.408003330230713
current transaction is aborted, commands ignored until end of transaction block
21.409991025924683
current transaction is aborted, commands ignored until end of transaction block
21.409991025924683
current transaction is aborted, commands ignored until end of transaction block
21.4119930267334
current transaction is aborted, commands ignored until end of transaction block
21.412994861602783
current transaction is aborted, commands ignored until end of transaction block
21.41439700126648
current transaction is aborted, commands ignored until end of transaction block
21.415826082229614
current transaction is aborted, commands ignored until end of transaction block
21.417184591293335
current transaction is aborted, commands ignored until end of transaction block
21.417993545532227
current transaction is aborted, commands ignored until end of transaction block
21.41899347305298
current transaction is aborted, commands ignored until end of transaction block
21.42059063911438
current transaction is aborted, commands ignored until end of transaction block
21.420993089675903
current transaction is aborted, commands ignored until end of transaction block
21.422083854675293
current transaction is aborted, commands ignored until end of transaction block
21.423991441726685
current transaction is aborted, commands ignored until end of transaction block
21.42499828338623
current transaction is aborted, commands ignored until end of transaction block
21.425995349884033
current transaction is aborted, commands ignored until end of transaction block
21.426995277404785
current transaction is aborted, commands ignored until end of transaction block
21.42798924446106
current transaction is aborted, commands ignored until end of transaction block
21.428995847702026
current transaction is aborted, commands ignored until end of transaction block
21.428995847702026
current transaction is aborted, commands ignored until end of transaction block
21.43004083633423
current transaction is aborted, commands ignored until end of transaction block
21.43101143836975
current transaction is aborted, commands ignored until end of transaction block
21.43218183517456
current transaction is aborted, commands ignored until end of transaction block
21.433187246322632
current transaction is aborted, commands ignored until end of transaction block
21.4340603351593
current transaction is aborted, commands ignored until end of transaction block
21.435998916625977
current transaction is aborted, commands ignored until end of transaction block
21.437003135681152
current transaction is aborted, commands ignored until end of transaction block
21.437992811203003
current transaction is aborted, commands ignored until end of transaction block
21.439003229141235
current transaction is aborted, commands ignored until end of transaction block
21.439995765686035
current transaction is aborted, commands ignored until end of transaction block
21.439995765686035
current transaction is aborted, commands ignored until end of transaction block
21.441049337387085
current transaction is aborted, commands ignored until end of transaction block
21.44210410118103
current transaction is aborted, commands ignored until end of transaction block
21.44300389289856
current transaction is aborted, commands ignored until end of transaction block
21.44403624534607
current transaction is aborted, commands ignored until end of transaction block
21.44535756111145
current transaction is aborted, commands ignored until end of transaction block
21.446000337600708
current transaction is aborted, commands ignored until end of transaction block
21.447033405303955
current transaction is aborted, commands ignored until end of transaction block
21.448034286499023
current transaction is aborted, commands ignored until end of transaction block
21.448034286499023
current transaction is aborted, commands ignored until end of transaction block
21.44909358024597
current transaction is aborted, commands ignored until end of transaction block
21.449998140335083
current transaction is aborted, commands ignored until end of transaction block
21.45100688934326
current transaction is aborted, commands ignored until end of transaction block
21.45203924179077
current transaction is aborted, commands ignored until end of transaction block
21.453174829483032
current transaction is aborted, commands ignored until end of transaction block
21.453996896743774
current transaction is aborted, commands ignored until end of transaction block
21.454317808151245
current transaction is aborted, commands ignored until end of transaction block
21.455358505249023
current transaction is aborted, commands ignored until end of transaction block
21.455991744995117
current transaction is aborted, commands ignored until end of transaction block
21.456995725631714
current transaction is aborted, commands ignored until end of transaction block
21.458038568496704
current transaction is aborted, commands ignored until end of transaction block
21.45899510383606
current transaction is aborted, commands ignored until end of transaction block
21.45899510383606
current transaction is aborted, commands ignored until end of transaction block
21.460033416748047
current transaction is aborted, commands ignored until end of transaction block
21.461036205291748
current transaction is aborted, commands ignored until end of transaction block
21.461997032165527
current transaction is aborted, commands ignored until end of transaction block
21.461997032165527
current transaction is aborted, commands ignored until end of transaction block
21.462995290756226
current transaction is aborted, commands ignored until end of transaction block
21.463996171951294
current transaction is aborted, commands ignored until end of transaction block
21.464993238449097
current transaction is aborted, commands ignored until end of transaction block
21.46603298187256
current transaction is aborted, commands ignored until end of transaction block
21.466995239257812
current transaction is aborted, commands ignored until end of transaction block
21.4679958820343
current transaction is aborted, commands ignored until end of transaction block
21.468286514282227
current transaction is aborted, commands ignored until end of transaction block
21.46899724006653
current transaction is aborted, commands ignored until end of transaction block
21.470048904418945
current transaction is aborted, commands ignored until end of transaction block
21.471013069152832
current transaction is aborted, commands ignored until end of transaction block
21.472036600112915
current transaction is aborted, commands ignored until end of transaction block
21.47303295135498
current transaction is aborted, commands ignored until end of transaction block
21.473992586135864
current transaction is aborted, commands ignored until end of transaction block
21.475036144256592
current transaction is aborted, commands ignored until end of transaction block
21.47526717185974
current transaction is aborted, commands ignored until end of transaction block
21.475995779037476
current transaction is aborted, commands ignored until end of transaction block
21.476996898651123
current transaction is aborted, commands ignored until end of transaction block
21.478050708770752
current transaction is aborted, commands ignored until end of transaction block
21.478994369506836
current transaction is aborted, commands ignored until end of transaction block
21.48056721687317
current transaction is aborted, commands ignored until end of transaction block
21.48146343231201
current transaction is aborted, commands ignored until end of transaction block
21.4820339679718
current transaction is aborted, commands ignored until end of transaction block
21.483033895492554
current transaction is aborted, commands ignored until end of transaction block
21.48399782180786
current transaction is aborted, commands ignored until end of transaction block
21.485074758529663
current transaction is aborted, commands ignored until end of transaction block
21.48600172996521
current transaction is aborted, commands ignored until end of transaction block
21.48600172996521
current transaction is aborted, commands ignored until end of transaction block
21.48708486557007
current transaction is aborted, commands ignored until end of transaction block
21.488001108169556
current transaction is aborted, commands ignored until end of transaction block
21.489001512527466
current transaction is aborted, commands ignored until end of transaction block
21.489001512527466
current transaction is aborted, commands ignored until end of transaction block
21.490140199661255
current transaction is aborted, commands ignored until end of transaction block
21.490994930267334
current transaction is aborted, commands ignored until end of transaction block
21.49213695526123
current transaction is aborted, commands ignored until end of transaction block
21.49308133125305
current transaction is aborted, commands ignored until end of transaction block
21.493996620178223
current transaction is aborted, commands ignored until end of transaction block
21.495038509368896
current transaction is aborted, commands ignored until end of transaction block
21.495038509368896
current transaction is aborted, commands ignored until end of transaction block
21.496001720428467
current transaction is aborted, commands ignored until end of transaction block
21.4969961643219
current transaction is aborted, commands ignored until end of transaction block
21.49815607070923
current transaction is aborted, commands ignored until end of transaction block
21.50003218650818
current transaction is aborted, commands ignored until end of transaction block
21.50099754333496
current transaction is aborted, commands ignored until end of transaction block
21.502007246017456
current transaction is aborted, commands ignored until end of transaction block
21.502996921539307
current transaction is aborted, commands ignored until end of transaction block
21.503058433532715
current transaction is aborted, commands ignored until end of transaction block
21.504002571105957
current transaction is aborted, commands ignored until end of transaction block
21.506002187728882
current transaction is aborted, commands ignored until end of transaction block
21.50699257850647
current transaction is aborted, commands ignored until end of transaction block
21.50699257850647
current transaction is aborted, commands ignored until end of transaction block
21.50823974609375
current transaction is aborted, commands ignored until end of transaction block
21.509299755096436
current transaction is aborted, commands ignored until end of transaction block
21.509608507156372
current transaction is aborted, commands ignored until end of transaction block
21.510585069656372
current transaction is aborted, commands ignored until end of transaction block
21.5113046169281
current transaction is aborted, commands ignored until end of transaction block
21.512262105941772
current transaction is aborted, commands ignored until end of transaction block
21.513263940811157
current transaction is aborted, commands ignored until end of transaction block
21.514359951019287
current transaction is aborted, commands ignored until end of transaction block
21.515374183654785
current transaction is aborted, commands ignored until end of transaction block
21.515624046325684
current transaction is aborted, commands ignored until end of transaction block
21.516271591186523
current transaction is aborted, commands ignored until end of transaction block
21.51729917526245
current transaction is aborted, commands ignored until end of transaction block
21.51858377456665
current transaction is aborted, commands ignored until end of transaction block
21.519259214401245
current transaction is aborted, commands ignored until end of transaction block
21.52026891708374
current transaction is aborted, commands ignored until end of transaction block
21.521299600601196
current transaction is aborted, commands ignored until end of transaction block
21.521299600601196
current transaction is aborted, commands ignored until end of transaction block
21.52230143547058
current transaction is aborted, commands ignored until end of transaction block
21.523484468460083
current transaction is aborted, commands ignored until end of transaction block
21.524298667907715
current transaction is aborted, commands ignored until end of transaction block
21.524298667907715
current transaction is aborted, commands ignored until end of transaction block
21.52525568008423
current transaction is aborted, commands ignored until end of transaction block
21.5262668132782
current transaction is aborted, commands ignored until end of transaction block
21.52726459503174
current transaction is aborted, commands ignored until end of transaction block
21.528300523757935
current transaction is aborted, commands ignored until end of transaction block
21.52926206588745
current transaction is aborted, commands ignored until end of transaction block
21.52926206588745
current transaction is aborted, commands ignored until end of transaction block
21.530298709869385
current transaction is aborted, commands ignored until end of transaction block
21.53130078315735
current transaction is aborted, commands ignored until end of transaction block
21.532261848449707
current transaction is aborted, commands ignored until end of transaction block
21.53327226638794
current transaction is aborted, commands ignored until end of transaction block
21.53445553779602
current transaction is aborted, commands ignored until end of transaction block
21.535258769989014
current transaction is aborted, commands ignored until end of transaction block
21.536307096481323
current transaction is aborted, commands ignored until end of transaction block
21.536561727523804
current transaction is aborted, commands ignored until end of transaction block
21.537299394607544
current transaction is aborted, commands ignored until end of transaction block
21.538389682769775
current transaction is aborted, commands ignored until end of transaction block
21.539388418197632
current transaction is aborted, commands ignored until end of transaction block
21.54026436805725
current transaction is aborted, commands ignored until end of transaction block
21.54145622253418
current transaction is aborted, commands ignored until end of transaction block
21.542261600494385
current transaction is aborted, commands ignored until end of transaction block
21.54326295852661
current transaction is aborted, commands ignored until end of transaction block
21.54426622390747
current transaction is aborted, commands ignored until end of transaction block
21.544395923614502
current transaction is aborted, commands ignored until end of transaction block
21.54526424407959
current transaction is aborted, commands ignored until end of transaction block
21.54637598991394
current transaction is aborted, commands ignored until end of transaction block
21.547277688980103
current transaction is aborted, commands ignored until end of transaction block
21.548477172851562
current transaction is aborted, commands ignored until end of transaction block
21.549275636672974
current transaction is aborted, commands ignored until end of transaction block
21.55026340484619
current transaction is aborted, commands ignored until end of transaction block
21.55128049850464
current transaction is aborted, commands ignored until end of transaction block
21.55149531364441
current transaction is aborted, commands ignored until end of transaction block
21.552305698394775
current transaction is aborted, commands ignored until end of transaction block
21.55331039428711
current transaction is aborted, commands ignored until end of transaction block
21.554269075393677
current transaction is aborted, commands ignored until end of transaction block
21.555256843566895
current transaction is aborted, commands ignored until end of transaction block
21.556262493133545
current transaction is aborted, commands ignored until end of transaction block
21.55729913711548
current transaction is aborted, commands ignored until end of transaction block
21.55729913711548
current transaction is aborted, commands ignored until end of transaction block
21.558632135391235
current transaction is aborted, commands ignored until end of transaction block
21.559301376342773
current transaction is aborted, commands ignored until end of transaction block
21.56025719642639
current transaction is aborted, commands ignored until end of transaction block
21.561272382736206
current transaction is aborted, commands ignored until end of transaction block
21.56225895881653
current transaction is aborted, commands ignored until end of transaction block
21.563299655914307
current transaction is aborted, commands ignored until end of transaction block
21.564299821853638
current transaction is aborted, commands ignored until end of transaction block
21.564299821853638
current transaction is aborted, commands ignored until end of transaction block
21.56529974937439
current transaction is aborted, commands ignored until end of transaction block
21.56649136543274
current transaction is aborted, commands ignored until end of transaction block
21.567461252212524
current transaction is aborted, commands ignored until end of transaction block
21.568260431289673
current transaction is aborted, commands ignored until end of transaction block
21.568260431289673
current transaction is aborted, commands ignored until end of transaction block
21.56931447982788
current transaction is aborted, commands ignored until end of transaction block
21.570889472961426
current transaction is aborted, commands ignored until end of transaction block
21.571263790130615
current transaction is aborted, commands ignored until end of transaction block
21.572267055511475
current transaction is aborted, commands ignored until end of transaction block
21.572267055511475
current transaction is aborted, commands ignored until end of transaction block
21.573298931121826
current transaction is aborted, commands ignored until end of transaction block
21.574263334274292
current transaction is aborted, commands ignored until end of transaction block
21.57630228996277
current transaction is aborted, commands ignored until end of transaction block
21.57738947868347
current transaction is aborted, commands ignored until end of transaction block
21.578558444976807
current transaction is aborted, commands ignored until end of transaction block
21.579299449920654
current transaction is aborted, commands ignored until end of transaction block
21.580557346343994
current transaction is aborted, commands ignored until end of transaction block
21.581257343292236
current transaction is aborted, commands ignored until end of transaction block
21.582270622253418
current transaction is aborted, commands ignored until end of transaction block
21.583266973495483
current transaction is aborted, commands ignored until end of transaction block
21.584299564361572
current transaction is aborted, commands ignored until end of transaction block
21.585315942764282
current transaction is aborted, commands ignored until end of transaction block
21.585315942764282
current transaction is aborted, commands ignored until end of transaction block
21.586259126663208
current transaction is aborted, commands ignored until end of transaction block
21.587299823760986
current transaction is aborted, commands ignored until end of transaction block
21.588299989700317
current transaction is aborted, commands ignored until end of transaction block
21.589276552200317
current transaction is aborted, commands ignored until end of transaction block
21.590256214141846
current transaction is aborted, commands ignored until end of transaction block
21.592259407043457
current transaction is aborted, commands ignored until end of transaction block
21.593260526657104
current transaction is aborted, commands ignored until end of transaction block
21.593260526657104
current transaction is aborted, commands ignored until end of transaction block
21.595260858535767
current transaction is aborted, commands ignored until end of transaction block
21.596268892288208
current transaction is aborted, commands ignored until end of transaction block
21.596268892288208
current transaction is aborted, commands ignored until end of transaction block
21.59826374053955
current transaction is aborted, commands ignored until end of transaction block
21.599265575408936
current transaction is aborted, commands ignored until end of transaction block
21.599265575408936
current transaction is aborted, commands ignored until end of transaction block
21.60030436515808
current transaction is aborted, commands ignored until end of transaction block
21.60129976272583
current transaction is aborted, commands ignored until end of transaction block
21.602269172668457
current transaction is aborted, commands ignored until end of transaction block
21.604262590408325
current transaction is aborted, commands ignored until end of transaction block
21.604262590408325
current transaction is aborted, commands ignored until end of transaction block
21.605559587478638
current transaction is aborted, commands ignored until end of transaction block
21.606263399124146
current transaction is aborted, commands ignored until end of transaction block
21.60726284980774
current transaction is aborted, commands ignored until end of transaction block
21.60726284980774
current transaction is aborted, commands ignored until end of transaction block
21.608299016952515
current transaction is aborted, commands ignored until end of transaction block
21.609308004379272
current transaction is aborted, commands ignored until end of transaction block
21.610313177108765
current transaction is aborted, commands ignored until end of transaction block
21.611262798309326
current transaction is aborted, commands ignored until end of transaction block
21.61236548423767
current transaction is aborted, commands ignored until end of transaction block
21.613269090652466
current transaction is aborted, commands ignored until end of transaction block
21.613269090652466
current transaction is aborted, commands ignored until end of transaction block
21.61430025100708
current transaction is aborted, commands ignored until end of transaction block
21.61526346206665
current transaction is aborted, commands ignored until end of transaction block
21.616267681121826
current transaction is aborted, commands ignored until end of transaction block
21.618260383605957
current transaction is aborted, commands ignored until end of transaction block
21.619357109069824
current transaction is aborted, commands ignored until end of transaction block
21.62126874923706
current transaction is aborted, commands ignored until end of transaction block
21.62126874923706
current transaction is aborted, commands ignored until end of transaction block
21.622267484664917
current transaction is aborted, commands ignored until end of transaction block
21.624265670776367
current transaction is aborted, commands ignored until end of transaction block
21.624265670776367
current transaction is aborted, commands ignored until end of transaction block
21.626262664794922
current transaction is aborted, commands ignored until end of transaction block
21.626317024230957
current transaction is aborted, commands ignored until end of transaction block
21.627270460128784
current transaction is aborted, commands ignored until end of transaction block
21.62825846672058
current transaction is aborted, commands ignored until end of transaction block
21.62929892539978
current transaction is aborted, commands ignored until end of transaction block
21.62929892539978
current transaction is aborted, commands ignored until end of transaction block
21.630300521850586
current transaction is aborted, commands ignored until end of transaction block
21.63127326965332
current transaction is aborted, commands ignored until end of transaction block
21.632615089416504
current transaction is aborted, commands ignored until end of transaction block
21.63325548171997
current transaction is aborted, commands ignored until end of transaction block
21.63429546356201
current transaction is aborted, commands ignored until end of transaction block
21.635263442993164
current transaction is aborted, commands ignored until end of transaction block
21.635525703430176
current transaction is aborted, commands ignored until end of transaction block
21.636515617370605
current transaction is aborted, commands ignored until end of transaction block
21.637351751327515
current transaction is aborted, commands ignored until end of transaction block
21.638275146484375
current transaction is aborted, commands ignored until end of transaction block
21.639302492141724
current transaction is aborted, commands ignored until end of transaction block
21.64047932624817
current transaction is aborted, commands ignored until end of transaction block
21.64130210876465
current transaction is aborted, commands ignored until end of transaction block
21.64130210876465
current transaction is aborted, commands ignored until end of transaction block
21.642334461212158
current transaction is aborted, commands ignored until end of transaction block
21.64325499534607
current transaction is aborted, commands ignored until end of transaction block
21.644261598587036
current transaction is aborted, commands ignored until end of transaction block
21.645554304122925
current transaction is aborted, commands ignored until end of transaction block
21.64626431465149
current transaction is aborted, commands ignored until end of transaction block
21.64726233482361
current transaction is aborted, commands ignored until end of transaction block
21.64826536178589
current transaction is aborted, commands ignored until end of transaction block
21.64826536178589
current transaction is aborted, commands ignored until end of transaction block
21.64937472343445
current transaction is aborted, commands ignored until end of transaction block
21.650465488433838
current transaction is aborted, commands ignored until end of transaction block
21.651259183883667
current transaction is aborted, commands ignored until end of transaction block
21.652257680892944
current transaction is aborted, commands ignored until end of transaction block
21.653314352035522
current transaction is aborted, commands ignored until end of transaction block
21.654300451278687
current transaction is aborted, commands ignored until end of transaction block
21.65528106689453
current transaction is aborted, commands ignored until end of transaction block
21.65528106689453
current transaction is aborted, commands ignored until end of transaction block
21.656269550323486
current transaction is aborted, commands ignored until end of transaction block
21.6573383808136
current transaction is aborted, commands ignored until end of transaction block
21.658260583877563
current transaction is aborted, commands ignored until end of transaction block
21.659531354904175
current transaction is aborted, commands ignored until end of transaction block
21.660300254821777
current transaction is aborted, commands ignored until end of transaction block
21.66130018234253
current transaction is aborted, commands ignored until end of transaction block
21.66158413887024
current transaction is aborted, commands ignored until end of transaction block
21.66230058670044
current transaction is aborted, commands ignored until end of transaction block
21.663404941558838
current transaction is aborted, commands ignored until end of transaction block
21.66426157951355
current transaction is aborted, commands ignored until end of transaction block
21.665261030197144
current transaction is aborted, commands ignored until end of transaction block
21.666268587112427
current transaction is aborted, commands ignored until end of transaction block
21.66738271713257
current transaction is aborted, commands ignored until end of transaction block
21.668259382247925
current transaction is aborted, commands ignored until end of transaction block
21.669299840927124
current transaction is aborted, commands ignored until end of transaction block
21.669299840927124
current transaction is aborted, commands ignored until end of transaction block
21.670299530029297
current transaction is aborted, commands ignored until end of transaction block
21.671257495880127
current transaction is aborted, commands ignored until end of transaction block
21.67227077484131
current transaction is aborted, commands ignored until end of transaction block
21.673603057861328
current transaction is aborted, commands ignored until end of transaction block
21.674766778945923
current transaction is aborted, commands ignored until end of transaction block
21.675257921218872
current transaction is aborted, commands ignored until end of transaction block
21.676265478134155
current transaction is aborted, commands ignored until end of transaction block
21.677301168441772
current transaction is aborted, commands ignored until end of transaction block
21.677550792694092
current transaction is aborted, commands ignored until end of transaction block
21.678263902664185
current transaction is aborted, commands ignored until end of transaction block
21.679336547851562
current transaction is aborted, commands ignored until end of transaction block
21.680264234542847
current transaction is aborted, commands ignored until end of transaction block
21.68125629425049
current transaction is aborted, commands ignored until end of transaction block
21.68125629425049
current transaction is aborted, commands ignored until end of transaction block
21.682302474975586
current transaction is aborted, commands ignored until end of transaction block
21.683299779891968
current transaction is aborted, commands ignored until end of transaction block
21.68429970741272
current transaction is aborted, commands ignored until end of transaction block
21.685261487960815
current transaction is aborted, commands ignored until end of transaction block
21.685261487960815
current transaction is aborted, commands ignored until end of transaction block
21.687371492385864
current transaction is aborted, commands ignored until end of transaction block
21.688256978988647
current transaction is aborted, commands ignored until end of transaction block
21.68930220603943
current transaction is aborted, commands ignored until end of transaction block
21.689534664154053
current transaction is aborted, commands ignored until end of transaction block
21.69049620628357
current transaction is aborted, commands ignored until end of transaction block
21.69129991531372
current transaction is aborted, commands ignored until end of transaction block
21.692268133163452
current transaction is aborted, commands ignored until end of transaction block
21.693275451660156
current transaction is aborted, commands ignored until end of transaction block
21.694257497787476
current transaction is aborted, commands ignored until end of transaction block
21.694257497787476
current transaction is aborted, commands ignored until end of transaction block
21.695300340652466
current transaction is aborted, commands ignored until end of transaction block
21.69630455970764
current transaction is aborted, commands ignored until end of transaction block
21.69726324081421
current transaction is aborted, commands ignored until end of transaction block
21.69726324081421
current transaction is aborted, commands ignored until end of transaction block
21.698406219482422
current transaction is aborted, commands ignored until end of transaction block
21.699259996414185
current transaction is aborted, commands ignored until end of transaction block
21.700281381607056
current transaction is aborted, commands ignored until end of transaction block
21.701258659362793
current transaction is aborted, commands ignored until end of transaction block
21.702301025390625
current transaction is aborted, commands ignored until end of transaction block
21.70326066017151
current transaction is aborted, commands ignored until end of transaction block
21.704301118850708
current transaction is aborted, commands ignored until end of transaction block
21.705263137817383
current transaction is aborted, commands ignored until end of transaction block
21.705263137817383
current transaction is aborted, commands ignored until end of transaction block
21.706302404403687
current transaction is aborted, commands ignored until end of transaction block
21.707306146621704
current transaction is aborted, commands ignored until end of transaction block
21.708309650421143
current transaction is aborted, commands ignored until end of transaction block
21.70954442024231
current transaction is aborted, commands ignored until end of transaction block
21.710299968719482
current transaction is aborted, commands ignored until end of transaction block
21.71126127243042
current transaction is aborted, commands ignored until end of transaction block
21.71126127243042
current transaction is aborted, commands ignored until end of transaction block
21.713056325912476
current transaction is aborted, commands ignored until end of transaction block
21.71325707435608
current transaction is aborted, commands ignored until end of transaction block
21.715256452560425
current transaction is aborted, commands ignored until end of transaction block
21.715256452560425
current transaction is aborted, commands ignored until end of transaction block
21.716742038726807
current transaction is aborted, commands ignored until end of transaction block
21.7173011302948
current transaction is aborted, commands ignored until end of transaction block
21.718271017074585
current transaction is aborted, commands ignored until end of transaction block
21.719301462173462
current transaction is aborted, commands ignored until end of transaction block
21.72030019760132
current transaction is aborted, commands ignored until end of transaction block
21.721312046051025
current transaction is aborted, commands ignored until end of transaction block
21.722261428833008
current transaction is aborted, commands ignored until end of transaction block
21.722261428833008
current transaction is aborted, commands ignored until end of transaction block
21.72330069541931
current transaction is aborted, commands ignored until end of transaction block
21.724299669265747
current transaction is aborted, commands ignored until end of transaction block
21.72530436515808
current transaction is aborted, commands ignored until end of transaction block
21.72530436515808
current transaction is aborted, commands ignored until end of transaction block
21.726305961608887
current transaction is aborted, commands ignored until end of transaction block
21.727261304855347
current transaction is aborted, commands ignored until end of transaction block
21.72926354408264
current transaction is aborted, commands ignored until end of transaction block
21.72926354408264
current transaction is aborted, commands ignored until end of transaction block
21.73030424118042
current transaction is aborted, commands ignored until end of transaction block
21.731266975402832
current transaction is aborted, commands ignored until end of transaction block
21.73226261138916
current transaction is aborted, commands ignored until end of transaction block
21.73226261138916
current transaction is aborted, commands ignored until end of transaction block
21.733317852020264
current transaction is aborted, commands ignored until end of transaction block
21.734262704849243
current transaction is aborted, commands ignored until end of transaction block
21.735262393951416
current transaction is aborted, commands ignored until end of transaction block
21.736260414123535
current transaction is aborted, commands ignored until end of transaction block
21.737443685531616
current transaction is aborted, commands ignored until end of transaction block
21.738300561904907
current transaction is aborted, commands ignored until end of transaction block
21.738300561904907
current transaction is aborted, commands ignored until end of transaction block
21.73930025100708
current transaction is aborted, commands ignored until end of transaction block
21.740479946136475
current transaction is aborted, commands ignored until end of transaction block
21.74137306213379
current transaction is aborted, commands ignored until end of transaction block
21.74227285385132
current transaction is aborted, commands ignored until end of transaction block
21.743300676345825
current transaction is aborted, commands ignored until end of transaction block
21.744633197784424
current transaction is aborted, commands ignored until end of transaction block
21.745500564575195
current transaction is aborted, commands ignored until end of transaction block
21.74630308151245
current transaction is aborted, commands ignored until end of transaction block
21.74730134010315
current transaction is aborted, commands ignored until end of transaction block
21.74730134010315
current transaction is aborted, commands ignored until end of transaction block
21.748300790786743
current transaction is aborted, commands ignored until end of transaction block
21.74931502342224
current transaction is aborted, commands ignored until end of transaction block
21.750260591506958
current transaction is aborted, commands ignored until end of transaction block
21.751301527023315
current transaction is aborted, commands ignored until end of transaction block
21.752300024032593
current transaction is aborted, commands ignored until end of transaction block
21.753301858901978
current transaction is aborted, commands ignored until end of transaction block
21.753301858901978
current transaction is aborted, commands ignored until end of transaction block
21.754300355911255
current transaction is aborted, commands ignored until end of transaction block
21.75526261329651
current transaction is aborted, commands ignored until end of transaction block
21.756550550460815
current transaction is aborted, commands ignored until end of transaction block
21.757301092147827
current transaction is aborted, commands ignored until end of transaction block
21.758517026901245
current transaction is aborted, commands ignored until end of transaction block
21.759264707565308
current transaction is aborted, commands ignored until end of transaction block
21.759264707565308
current transaction is aborted, commands ignored until end of transaction block
21.76030445098877
current transaction is aborted, commands ignored until end of transaction block
21.761587381362915
current transaction is aborted, commands ignored until end of transaction block
21.76326084136963
current transaction is aborted, commands ignored until end of transaction block
21.763930559158325
current transaction is aborted, commands ignored until end of transaction block
21.765300750732422
current transaction is aborted, commands ignored until end of transaction block
21.766261100769043
current transaction is aborted, commands ignored until end of transaction block
21.766261100769043
current transaction is aborted, commands ignored until end of transaction block
21.7678165435791
current transaction is aborted, commands ignored until end of transaction block
21.76825475692749
current transaction is aborted, commands ignored until end of transaction block
21.770312547683716
current transaction is aborted, commands ignored until end of transaction block
21.77125859260559
current transaction is aborted, commands ignored until end of transaction block
21.77226996421814
current transaction is aborted, commands ignored until end of transaction block
21.772417783737183
current transaction is aborted, commands ignored until end of transaction block
21.773263216018677
current transaction is aborted, commands ignored until end of transaction block
21.774269104003906
current transaction is aborted, commands ignored until end of transaction block
21.7752583026886
current transaction is aborted, commands ignored until end of transaction block
21.777286291122437
current transaction is aborted, commands ignored until end of transaction block
21.777692079544067
current transaction is aborted, commands ignored until end of transaction block
21.77826952934265
current transaction is aborted, commands ignored until end of transaction block
21.77926802635193
current transaction is aborted, commands ignored until end of transaction block
21.780269384384155
current transaction is aborted, commands ignored until end of transaction block
21.780269384384155
current transaction is aborted, commands ignored until end of transaction block
21.782256603240967
current transaction is aborted, commands ignored until end of transaction block
21.783271551132202
current transaction is aborted, commands ignored until end of transaction block
21.784263372421265
current transaction is aborted, commands ignored until end of transaction block
21.785258531570435
current transaction is aborted, commands ignored until end of transaction block
21.786259412765503
current transaction is aborted, commands ignored until end of transaction block
21.78725528717041
current transaction is aborted, commands ignored until end of transaction block
21.78725528717041
current transaction is aborted, commands ignored until end of transaction block
21.78858494758606
current transaction is aborted, commands ignored until end of transaction block
21.789260149002075
current transaction is aborted, commands ignored until end of transaction block
21.790271520614624
current transaction is aborted, commands ignored until end of transaction block
21.791257858276367
current transaction is aborted, commands ignored until end of transaction block
21.792318105697632
current transaction is aborted, commands ignored until end of transaction block
21.793259620666504
current transaction is aborted, commands ignored until end of transaction block
21.793259620666504
current transaction is aborted, commands ignored until end of transaction block
21.79431462287903
current transaction is aborted, commands ignored until end of transaction block
21.795475721359253
current transaction is aborted, commands ignored until end of transaction block
21.796262502670288
current transaction is aborted, commands ignored until end of transaction block
21.796262502670288
current transaction is aborted, commands ignored until end of transaction block
21.79828381538391
current transaction is aborted, commands ignored until end of transaction block
21.799262046813965
current transaction is aborted, commands ignored until end of transaction block
21.799262046813965
current transaction is aborted, commands ignored until end of transaction block
21.800389766693115
current transaction is aborted, commands ignored until end of transaction block
21.801302194595337
current transaction is aborted, commands ignored until end of transaction block
21.801302194595337
current transaction is aborted, commands ignored until end of transaction block
21.80229949951172
current transaction is aborted, commands ignored until end of transaction block
21.80326199531555
current transaction is aborted, commands ignored until end of transaction block
current transaction is aborted, commands ignored until end of transaction block
22.389400482177734
current transaction is aborted, commands ignored until end of transaction block
22.390396118164062
current transaction is aborted, commands ignored until end of transaction block
22.390396118164062
current transaction is aborted, commands ignored until end of transaction block
22.3916494846344
current transaction is aborted, commands ignored until end of transaction block
22.392488956451416
current transaction is aborted, commands ignored until end of transaction block
22.393394470214844
current transaction is aborted, commands ignored until end of transaction block
22.394437789916992
current transaction is aborted, commands ignored until end of transaction block
22.395419359207153
current transaction is aborted, commands ignored until end of transaction block
22.396429538726807
current transaction is aborted, commands ignored until end of transaction block
22.397398948669434
current transaction is aborted, commands ignored until end of transaction block
22.397398948669434
current transaction is aborted, commands ignored until end of transaction block
22.398441076278687
current transaction is aborted, commands ignored until end of transaction block
22.39939045906067
current transaction is aborted, commands ignored until end of transaction block
22.400456428527832
current transaction is aborted, commands ignored until end of transaction block
22.401390552520752
current transaction is aborted, commands ignored until end of transaction block
22.402646780014038
current transaction is aborted, commands ignored until end of transaction block
22.40339946746826
current transaction is aborted, commands ignored until end of transaction block
22.404399394989014
current transaction is aborted, commands ignored until end of transaction block
22.404399394989014
current transaction is aborted, commands ignored until end of transaction block
22.40543222427368
current transaction is aborted, commands ignored until end of transaction block
22.406432628631592
current transaction is aborted, commands ignored until end of transaction block
22.407447338104248
current transaction is aborted, commands ignored until end of transaction block
22.40858483314514
current transaction is aborted, commands ignored until end of transaction block
22.409580945968628
current transaction is aborted, commands ignored until end of transaction block
22.409580945968628
current transaction is aborted, commands ignored until end of transaction block
22.4106228351593
current transaction is aborted, commands ignored until end of transaction block
22.41176676750183
current transaction is aborted, commands ignored until end of transaction block
22.412582635879517
current transaction is aborted, commands ignored until end of transaction block
22.413583993911743
current transaction is aborted, commands ignored until end of transaction block
22.4145827293396
current transaction is aborted, commands ignored until end of transaction block
22.41586446762085
current transaction is aborted, commands ignored until end of transaction block
22.41657543182373
current transaction is aborted, commands ignored until end of transaction block
22.41657543182373
current transaction is aborted, commands ignored until end of transaction block
22.41762137413025
current transaction is aborted, commands ignored until end of transaction block
22.41858983039856
current transaction is aborted, commands ignored until end of transaction block
22.419583082199097
current transaction is aborted, commands ignored until end of transaction block
22.420060873031616
current transaction is aborted, commands ignored until end of transaction block
22.420620441436768
current transaction is aborted, commands ignored until end of transaction block
22.421586990356445
current transaction is aborted, commands ignored until end of transaction block
22.4237380027771
current transaction is aborted, commands ignored until end of transaction block
22.42458176612854
current transaction is aborted, commands ignored until end of transaction block
22.425578355789185
current transaction is aborted, commands ignored until end of transaction block
22.425578355789185
current transaction is aborted, commands ignored until end of transaction block
22.426621913909912
current transaction is aborted, commands ignored until end of transaction block
22.42781400680542
current transaction is aborted, commands ignored until end of transaction block
22.42857837677002
current transaction is aborted, commands ignored until end of transaction block
22.429621696472168
current transaction is aborted, commands ignored until end of transaction block
22.430579900741577
current transaction is aborted, commands ignored until end of transaction block
22.43163800239563
current transaction is aborted, commands ignored until end of transaction block
22.432620525360107
current transaction is aborted, commands ignored until end of transaction block
22.432620525360107
current transaction is aborted, commands ignored until end of transaction block
22.433582067489624
current transaction is aborted, commands ignored until end of transaction block
22.43462085723877
current transaction is aborted, commands ignored until end of transaction block
22.43558382987976
current transaction is aborted, commands ignored until end of transaction block
22.43661665916443
current transaction is aborted, commands ignored until end of transaction block
22.43758487701416
current transaction is aborted, commands ignored until end of transaction block
22.43758487701416
current transaction is aborted, commands ignored until end of transaction block
22.43861961364746
current transaction is aborted, commands ignored until end of transaction block
22.439587831497192
current transaction is aborted, commands ignored until end of transaction block
22.440582990646362
current transaction is aborted, commands ignored until end of transaction block
22.440582990646362
current transaction is aborted, commands ignored until end of transaction block
22.44161891937256
current transaction is aborted, commands ignored until end of transaction block
22.442588090896606
current transaction is aborted, commands ignored until end of transaction block
22.444579124450684
current transaction is aborted, commands ignored until end of transaction block
22.445578813552856
current transaction is aborted, commands ignored until end of transaction block
22.446619510650635
current transaction is aborted, commands ignored until end of transaction block
22.446619510650635
current transaction is aborted, commands ignored until end of transaction block
22.447919368743896
current transaction is aborted, commands ignored until end of transaction block
22.44861936569214
current transaction is aborted, commands ignored until end of transaction block
22.44957685470581
current transaction is aborted, commands ignored until end of transaction block
22.450582265853882
current transaction is aborted, commands ignored until end of transaction block
22.45157504081726
current transaction is aborted, commands ignored until end of transaction block
22.45157504081726
current transaction is aborted, commands ignored until end of transaction block
22.452619314193726
current transaction is aborted, commands ignored until end of transaction block
22.453741550445557
current transaction is aborted, commands ignored until end of transaction block
22.45462155342102
current transaction is aborted, commands ignored until end of transaction block
22.455620765686035
current transaction is aborted, commands ignored until end of transaction block
22.45585799217224
current transaction is aborted, commands ignored until end of transaction block
22.456597089767456
current transaction is aborted, commands ignored until end of transaction block
22.45762062072754
current transaction is aborted, commands ignored until end of transaction block
22.45862340927124
current transaction is aborted, commands ignored until end of transaction block
22.45962619781494
current transaction is aborted, commands ignored until end of transaction block
22.4606192111969
current transaction is aborted, commands ignored until end of transaction block
22.461623668670654
current transaction is aborted, commands ignored until end of transaction block
22.461623668670654
current transaction is aborted, commands ignored until end of transaction block
22.46284294128418
current transaction is aborted, commands ignored until end of transaction block
22.463582754135132
current transaction is aborted, commands ignored until end of transaction block
22.464771032333374
current transaction is aborted, commands ignored until end of transaction block
22.465619802474976
current transaction is aborted, commands ignored until end of transaction block
22.46665906906128
current transaction is aborted, commands ignored until end of transaction block
22.46665906906128
current transaction is aborted, commands ignored until end of transaction block
22.46763038635254
current transaction is aborted, commands ignored until end of transaction block
22.468619108200073
current transaction is aborted, commands ignored until end of transaction block
22.469605445861816
current transaction is aborted, commands ignored until end of transaction block
22.470587491989136
current transaction is aborted, commands ignored until end of transaction block
22.47163438796997
current transaction is aborted, commands ignored until end of transaction block
22.47262191772461
current transaction is aborted, commands ignored until end of transaction block
22.47262191772461
current transaction is aborted, commands ignored until end of transaction block
22.473968982696533
current transaction is aborted, commands ignored until end of transaction block
22.474780797958374
current transaction is aborted, commands ignored until end of transaction block
22.475619554519653
current transaction is aborted, commands ignored until end of transaction block
22.476575136184692
current transaction is aborted, commands ignored until end of transaction block
22.477578163146973
current transaction is aborted, commands ignored until end of transaction block
22.47866702079773
current transaction is aborted, commands ignored until end of transaction block
22.479577779769897
current transaction is aborted, commands ignored until end of transaction block
22.48058295249939
current transaction is aborted, commands ignored until end of transaction block
22.480786323547363
current transaction is aborted, commands ignored until end of transaction block
22.481619834899902
current transaction is aborted, commands ignored until end of transaction block
22.48268985748291
current transaction is aborted, commands ignored until end of transaction block
22.48357605934143
current transaction is aborted, commands ignored until end of transaction block
22.484585523605347
current transaction is aborted, commands ignored until end of transaction block
22.484585523605347
current transaction is aborted, commands ignored until end of transaction block
22.485968112945557
current transaction is aborted, commands ignored until end of transaction block
22.48662281036377
current transaction is aborted, commands ignored until end of transaction block
22.487722635269165
current transaction is aborted, commands ignored until end of transaction block
22.488738298416138
current transaction is aborted, commands ignored until end of transaction block
22.488738298416138
current transaction is aborted, commands ignored until end of transaction block
22.489619255065918
current transaction is aborted, commands ignored until end of transaction block
22.490701913833618
current transaction is aborted, commands ignored until end of transaction block
22.491589784622192
current transaction is aborted, commands ignored until end of transaction block
22.49262523651123
current transaction is aborted, commands ignored until end of transaction block
22.493619918823242
current transaction is aborted, commands ignored until end of transaction block
22.494619131088257
current transaction is aborted, commands ignored until end of transaction block
22.494943857192993
current transaction is aborted, commands ignored until end of transaction block
22.495619535446167
current transaction is aborted, commands ignored until end of transaction block
22.49661922454834
current transaction is aborted, commands ignored until end of transaction block
22.49661922454834
current transaction is aborted, commands ignored until end of transaction block
22.498579263687134
current transaction is aborted, commands ignored until end of transaction block
22.4995756149292
current transaction is aborted, commands ignored until end of transaction block
22.4995756149292
current transaction is aborted, commands ignored until end of transaction block
22.500619888305664
current transaction is aborted, commands ignored until end of transaction block
22.501622438430786
current transaction is aborted, commands ignored until end of transaction block
22.501622438430786
current transaction is aborted, commands ignored until end of transaction block
22.50261950492859
current transaction is aborted, commands ignored until end of transaction block
22.50362253189087
current transaction is aborted, commands ignored until end of transaction block
22.50471591949463
current transaction is aborted, commands ignored until end of transaction block
22.50558590888977
current transaction is aborted, commands ignored until end of transaction block
22.506609201431274
current transaction is aborted, commands ignored until end of transaction block
22.507588863372803
current transaction is aborted, commands ignored until end of transaction block
22.50865888595581
current transaction is aborted, commands ignored until end of transaction block
22.50865888595581
current transaction is aborted, commands ignored until end of transaction block
22.50965189933777
current transaction is aborted, commands ignored until end of transaction block
22.510953187942505
current transaction is aborted, commands ignored until end of transaction block
22.511654138565063
current transaction is aborted, commands ignored until end of transaction block
22.512654304504395
current transaction is aborted, commands ignored until end of transaction block
22.51471781730652
current transaction is aborted, commands ignored until end of transaction block
22.515650749206543
current transaction is aborted, commands ignored until end of transaction block
22.516690731048584
current transaction is aborted, commands ignored until end of transaction block
22.517796516418457
current transaction is aborted, commands ignored until end of transaction block
22.51864790916443
current transaction is aborted, commands ignored until end of transaction block
22.519697666168213
current transaction is aborted, commands ignored until end of transaction block
22.520652532577515
current transaction is aborted, commands ignored until end of transaction block
22.520652532577515
current transaction is aborted, commands ignored until end of transaction block
22.52168846130371
current transaction is aborted, commands ignored until end of transaction block
22.522687673568726
current transaction is aborted, commands ignored until end of transaction block
22.523655652999878
current transaction is aborted, commands ignored until end of transaction block
22.524694442749023
current transaction is aborted, commands ignored until end of transaction block
22.525651454925537
current transaction is aborted, commands ignored until end of transaction block
22.526660442352295
current transaction is aborted, commands ignored until end of transaction block
22.527743577957153
current transaction is aborted, commands ignored until end of transaction block
22.52864694595337
current transaction is aborted, commands ignored until end of transaction block
22.529650688171387
current transaction is aborted, commands ignored until end of transaction block
22.530656337738037
current transaction is aborted, commands ignored until end of transaction block
22.53164529800415
current transaction is aborted, commands ignored until end of transaction block
22.53264856338501
current transaction is aborted, commands ignored until end of transaction block
22.534698486328125
current transaction is aborted, commands ignored until end of transaction block
22.535646677017212
current transaction is aborted, commands ignored until end of transaction block
22.535646677017212
current transaction is aborted, commands ignored until end of transaction block
22.536803483963013
current transaction is aborted, commands ignored until end of transaction block
22.537689208984375
current transaction is aborted, commands ignored until end of transaction block
22.538870573043823
current transaction is aborted, commands ignored until end of transaction block
22.53964877128601
current transaction is aborted, commands ignored until end of transaction block
22.540693998336792
current transaction is aborted, commands ignored until end of transaction block
22.54165029525757
current transaction is aborted, commands ignored until end of transaction block
22.542797565460205
current transaction is aborted, commands ignored until end of transaction block
22.54367423057556
current transaction is aborted, commands ignored until end of transaction block
22.545154809951782
current transaction is aborted, commands ignored until end of transaction block
22.545656442642212
current transaction is aborted, commands ignored until end of transaction block
22.54665732383728
current transaction is aborted, commands ignored until end of transaction block
22.54765796661377
current transaction is aborted, commands ignored until end of transaction block
22.548659086227417
current transaction is aborted, commands ignored until end of transaction block
22.549652338027954
current transaction is aborted, commands ignored until end of transaction block
22.549652338027954
current transaction is aborted, commands ignored until end of transaction block
22.550652265548706
current transaction is aborted, commands ignored until end of transaction block
22.5518057346344
current transaction is aborted, commands ignored until end of transaction block
22.552688598632812
current transaction is aborted, commands ignored until end of transaction block
22.553691625595093
current transaction is aborted, commands ignored until end of transaction block
22.5546555519104
current transaction is aborted, commands ignored until end of transaction block
22.555647373199463
current transaction is aborted, commands ignored until end of transaction block
22.55664610862732
current transaction is aborted, commands ignored until end of transaction block
22.557650804519653
current transaction is aborted, commands ignored until end of transaction block
22.557650804519653
current transaction is aborted, commands ignored until end of transaction block
22.558687925338745
current transaction is aborted, commands ignored until end of transaction block
22.559687852859497
current transaction is aborted, commands ignored until end of transaction block
22.560647010803223
current transaction is aborted, commands ignored until end of transaction block
22.56264567375183
current transaction is aborted, commands ignored until end of transaction block
22.563647985458374
current transaction is aborted, commands ignored until end of transaction block
22.564648389816284
current transaction is aborted, commands ignored until end of transaction block
22.565645456314087
current transaction is aborted, commands ignored until end of transaction block
22.56669282913208
current transaction is aborted, commands ignored until end of transaction block
22.567649364471436
current transaction is aborted, commands ignored until end of transaction block
22.568690538406372
current transaction is aborted, commands ignored until end of transaction block
22.56966996192932
current transaction is aborted, commands ignored until end of transaction block
22.570688724517822
current transaction is aborted, commands ignored until end of transaction block
22.571646451950073
current transaction is aborted, commands ignored until end of transaction block
22.572655200958252
current transaction is aborted, commands ignored until end of transaction block
22.572655200958252
current transaction is aborted, commands ignored until end of transaction block
22.573691844940186
current transaction is aborted, commands ignored until end of transaction block
22.574687719345093
current transaction is aborted, commands ignored until end of transaction block
22.5756618976593
current transaction is aborted, commands ignored until end of transaction block
22.57720375061035
current transaction is aborted, commands ignored until end of transaction block
22.577653646469116
current transaction is aborted, commands ignored until end of transaction block
22.57891345024109
current transaction is aborted, commands ignored until end of transaction block
22.579689264297485
current transaction is aborted, commands ignored until end of transaction block
22.579689264297485
current transaction is aborted, commands ignored until end of transaction block
22.580687761306763
current transaction is aborted, commands ignored until end of transaction block
22.581648111343384
current transaction is aborted, commands ignored until end of transaction block
22.582702159881592
current transaction is aborted, commands ignored until end of transaction block
22.583646059036255
current transaction is aborted, commands ignored until end of transaction block
22.584646463394165
current transaction is aborted, commands ignored until end of transaction block
22.584646463394165
current transaction is aborted, commands ignored until end of transaction block
22.586652517318726
current transaction is aborted, commands ignored until end of transaction block
22.587655544281006
current transaction is aborted, commands ignored until end of transaction block
22.58774471282959
current transaction is aborted, commands ignored until end of transaction block
22.588645696640015
current transaction is aborted, commands ignored until end of transaction block
22.590643882751465
current transaction is aborted, commands ignored until end of transaction block
22.591647624969482
current transaction is aborted, commands ignored until end of transaction block
22.59268808364868
current transaction is aborted, commands ignored until end of transaction block
22.59268808364868
current transaction is aborted, commands ignored until end of transaction block
22.59368872642517
current transaction is aborted, commands ignored until end of transaction block
22.595014333724976
current transaction is aborted, commands ignored until end of transaction block
22.59564995765686
current transaction is aborted, commands ignored until end of transaction block
22.596691370010376
current transaction is aborted, commands ignored until end of transaction block
22.597690105438232
current transaction is aborted, commands ignored until end of transaction block
22.5986487865448
current transaction is aborted, commands ignored until end of transaction block
22.599849700927734
current transaction is aborted, commands ignored until end of transaction block
22.60064697265625
current transaction is aborted, commands ignored until end of transaction block
22.601747751235962
current transaction is aborted, commands ignored until end of transaction block
22.6026508808136
current transaction is aborted, commands ignored until end of transaction block
22.60374927520752
current transaction is aborted, commands ignored until end of transaction block
22.60374927520752
current transaction is aborted, commands ignored until end of transaction block
22.604695558547974
current transaction is aborted, commands ignored until end of transaction block
22.605934381484985
current transaction is aborted, commands ignored until end of transaction block
22.60665202140808
current transaction is aborted, commands ignored until end of transaction block
22.60665202140808
current transaction is aborted, commands ignored until end of transaction block
22.6079261302948
current transaction is aborted, commands ignored until end of transaction block
22.608769178390503
current transaction is aborted, commands ignored until end of transaction block
22.60975480079651
current transaction is aborted, commands ignored until end of transaction block
22.61096978187561
current transaction is aborted, commands ignored until end of transaction block
22.611092567443848
current transaction is aborted, commands ignored until end of transaction block
22.611788988113403
current transaction is aborted, commands ignored until end of transaction block
22.612887382507324
current transaction is aborted, commands ignored until end of transaction block
22.612887382507324
current transaction is aborted, commands ignored until end of transaction block
22.61375856399536
current transaction is aborted, commands ignored until end of transaction block
22.61475157737732
current transaction is aborted, commands ignored until end of transaction block
22.615748643875122
current transaction is aborted, commands ignored until end of transaction block
22.61676025390625
current transaction is aborted, commands ignored until end of transaction block
22.61794114112854
current transaction is aborted, commands ignored until end of transaction block
22.619084358215332
current transaction is aborted, commands ignored until end of transaction block
22.619789838790894
current transaction is aborted, commands ignored until end of transaction block
22.620789766311646
current transaction is aborted, commands ignored until end of transaction block
22.6211097240448
current transaction is aborted, commands ignored until end of transaction block
22.621800422668457
current transaction is aborted, commands ignored until end of transaction block
22.622755527496338
current transaction is aborted, commands ignored until end of transaction block
22.623753309249878
current transaction is aborted, commands ignored until end of transaction block
22.62475275993347
current transaction is aborted, commands ignored until end of transaction block
22.62575054168701
current transaction is aborted, commands ignored until end of transaction block
22.62575054168701
current transaction is aborted, commands ignored until end of transaction block
22.6268048286438
current transaction is aborted, commands ignored until end of transaction block
22.628138065338135
current transaction is aborted, commands ignored until end of transaction block
22.628791093826294
current transaction is aborted, commands ignored until end of transaction block
22.629753351211548
current transaction is aborted, commands ignored until end of transaction block
22.630756855010986
current transaction is aborted, commands ignored until end of transaction block
22.631752967834473
current transaction is aborted, commands ignored until end of transaction block
22.632791757583618
current transaction is aborted, commands ignored until end of transaction block
22.632791757583618
current transaction is aborted, commands ignored until end of transaction block
22.633758068084717
current transaction is aborted, commands ignored until end of transaction block
22.63475251197815
current transaction is aborted, commands ignored until end of transaction block
22.63575792312622
current transaction is aborted, commands ignored until end of transaction block
22.636757373809814
current transaction is aborted, commands ignored until end of transaction block
22.63775086402893
current transaction is aborted, commands ignored until end of transaction block
22.63875436782837
current transaction is aborted, commands ignored until end of transaction block
22.639752626419067
current transaction is aborted, commands ignored until end of transaction block
22.639752626419067
current transaction is aborted, commands ignored until end of transaction block
22.640790700912476
current transaction is aborted, commands ignored until end of transaction block
22.64178967475891
current transaction is aborted, commands ignored until end of transaction block
22.642789125442505
current transaction is aborted, commands ignored until end of transaction block
22.643789529800415
current transaction is aborted, commands ignored until end of transaction block
22.644756317138672
current transaction is aborted, commands ignored until end of transaction block
22.6460063457489
current transaction is aborted, commands ignored until end of transaction block
22.646748781204224
current transaction is aborted, commands ignored until end of transaction block
22.647753477096558
current transaction is aborted, commands ignored until end of transaction block
22.647753477096558
current transaction is aborted, commands ignored until end of transaction block
22.648792266845703
current transaction is aborted, commands ignored until end of transaction block
22.649791717529297
current transaction is aborted, commands ignored until end of transaction block
22.650748014450073
current transaction is aborted, commands ignored until end of transaction block
22.651752710342407
current transaction is aborted, commands ignored until end of transaction block
22.651752710342407
current transaction is aborted, commands ignored until end of transaction block
22.65278959274292
current transaction is aborted, commands ignored until end of transaction block
22.653751611709595
current transaction is aborted, commands ignored until end of transaction block
22.655056715011597
current transaction is aborted, commands ignored until end of transaction block
22.655758142471313
current transaction is aborted, commands ignored until end of transaction block
22.656747579574585
current transaction is aborted, commands ignored until end of transaction block
22.6577889919281
current transaction is aborted, commands ignored until end of transaction block
22.6577889919281
current transaction is aborted, commands ignored until end of transaction block
22.65879511833191
current transaction is aborted, commands ignored until end of transaction block
22.659816026687622
current transaction is aborted, commands ignored until end of transaction block
22.661064624786377
current transaction is aborted, commands ignored until end of transaction block
22.661789417266846
current transaction is aborted, commands ignored until end of transaction block
22.662802696228027
current transaction is aborted, commands ignored until end of transaction block
22.663795709609985
current transaction is aborted, commands ignored until end of transaction block
22.663795709609985
current transaction is aborted, commands ignored until end of transaction block
22.665762662887573
current transaction is aborted, commands ignored until end of transaction block
22.666894674301147
current transaction is aborted, commands ignored until end of transaction block
22.66787052154541
current transaction is aborted, commands ignored until end of transaction block
22.668790102005005
current transaction is aborted, commands ignored until end of transaction block
22.668790102005005
current transaction is aborted, commands ignored until end of transaction block
22.669837951660156
current transaction is aborted, commands ignored until end of transaction block
22.67086887359619
current transaction is aborted, commands ignored until end of transaction block
22.67174768447876
current transaction is aborted, commands ignored until end of transaction block
22.672749757766724
current transaction is aborted, commands ignored until end of transaction block
22.6739399433136
current transaction is aborted, commands ignored until end of transaction block
22.674752235412598
current transaction is aborted, commands ignored until end of transaction block
22.67579436302185
current transaction is aborted, commands ignored until end of transaction block
22.67579436302185
current transaction is aborted, commands ignored until end of transaction block
22.67674970626831
current transaction is aborted, commands ignored until end of transaction block
22.67775321006775
current transaction is aborted, commands ignored until end of transaction block
22.678757429122925
current transaction is aborted, commands ignored until end of transaction block
22.678757429122925
current transaction is aborted, commands ignored until end of transaction block
22.679790496826172
current transaction is aborted, commands ignored until end of transaction block
22.680857181549072
current transaction is aborted, commands ignored until end of transaction block
22.681793451309204
current transaction is aborted, commands ignored until end of transaction block
22.682799577713013
current transaction is aborted, commands ignored until end of transaction block
22.683794021606445
current transaction is aborted, commands ignored until end of transaction block
22.683794021606445
current transaction is aborted, commands ignored until end of transaction block
22.684789180755615
current transaction is aborted, commands ignored until end of transaction block
22.685749292373657
current transaction is aborted, commands ignored until end of transaction block
22.686805725097656
current transaction is aborted, commands ignored until end of transaction block
22.687747955322266
current transaction is aborted, commands ignored until end of transaction block
22.688745737075806
current transaction is aborted, commands ignored until end of transaction block
22.688745737075806
current transaction is aborted, commands ignored until end of transaction block
22.689789295196533
current transaction is aborted, commands ignored until end of transaction block
22.690845251083374
current transaction is aborted, commands ignored until end of transaction block
22.69198703765869
current transaction is aborted, commands ignored until end of transaction block
22.69198703765869
current transaction is aborted, commands ignored until end of transaction block
22.69282364845276
current transaction is aborted, commands ignored until end of transaction block
22.693801403045654
current transaction is aborted, commands ignored until end of transaction block
22.694749355316162
current transaction is aborted, commands ignored until end of transaction block
22.69579029083252
current transaction is aborted, commands ignored until end of transaction block
22.696951627731323
current transaction is aborted, commands ignored until end of transaction block
22.697747945785522
current transaction is aborted, commands ignored until end of transaction block
22.698758363723755
current transaction is aborted, commands ignored until end of transaction block
22.699008464813232
current transaction is aborted, commands ignored until end of transaction block
22.70077681541443
current transaction is aborted, commands ignored until end of transaction block
22.701797485351562
current transaction is aborted, commands ignored until end of transaction block
22.703518867492676
current transaction is aborted, commands ignored until end of transaction block
22.703812837600708
current transaction is aborted, commands ignored until end of transaction block
22.704790830612183
current transaction is aborted, commands ignored until end of transaction block
22.70574641227722
current transaction is aborted, commands ignored until end of transaction block
22.70674991607666
current transaction is aborted, commands ignored until end of transaction block
22.70775008201599
current transaction is aborted, commands ignored until end of transaction block
22.70879054069519
current transaction is aborted, commands ignored until end of transaction block
22.70879054069519
current transaction is aborted, commands ignored until end of transaction block
22.709796667099
current transaction is aborted, commands ignored until end of transaction block
22.711790561676025
current transaction is aborted, commands ignored until end of transaction block
22.712746381759644
current transaction is aborted, commands ignored until end of transaction block
22.71374821662903
current transaction is aborted, commands ignored until end of transaction block
22.715749502182007
current transaction is aborted, commands ignored until end of transaction block
22.715749502182007
current transaction is aborted, commands ignored until end of transaction block
22.716752529144287
current transaction is aborted, commands ignored until end of transaction block
22.71803045272827
current transaction is aborted, commands ignored until end of transaction block
22.718834161758423
current transaction is aborted, commands ignored until end of transaction block
22.719942331314087
current transaction is aborted, commands ignored until end of transaction block
22.720759868621826
current transaction is aborted, commands ignored until end of transaction block
22.72275400161743
current transaction is aborted, commands ignored until end of transaction block
22.723748922348022
current transaction is aborted, commands ignored until end of transaction block
22.724748373031616
current transaction is aborted, commands ignored until end of transaction block
22.724748373031616
current transaction is aborted, commands ignored until end of transaction block
22.725751161575317
current transaction is aborted, commands ignored until end of transaction block
22.726746082305908
current transaction is aborted, commands ignored until end of transaction block
22.727749586105347
current transaction is aborted, commands ignored until end of transaction block
22.7291738986969
current transaction is aborted, commands ignored until end of transaction block
22.72974944114685
current transaction is aborted, commands ignored until end of transaction block
22.73074746131897
current transaction is aborted, commands ignored until end of transaction block
22.73182511329651
current transaction is aborted, commands ignored until end of transaction block
22.73274803161621
current transaction is aborted, commands ignored until end of transaction block
22.733746767044067
current transaction is aborted, commands ignored until end of transaction block
22.734748601913452
current transaction is aborted, commands ignored until end of transaction block
22.735785722732544
current transaction is aborted, commands ignored until end of transaction block
22.73674726486206
current transaction is aborted, commands ignored until end of transaction block
22.737756729125977
current transaction is aborted, commands ignored until end of transaction block
22.738747596740723
current transaction is aborted, commands ignored until end of transaction block
22.73974871635437
current transaction is aborted, commands ignored until end of transaction block
22.74074673652649
current transaction is aborted, commands ignored until end of transaction block
22.74074673652649
current transaction is aborted, commands ignored until end of transaction block
22.74276900291443
current transaction is aborted, commands ignored until end of transaction block
22.74378514289856
current transaction is aborted, commands ignored until end of transaction block
22.744747638702393
current transaction is aborted, commands ignored until end of transaction block
22.745747804641724
current transaction is aborted, commands ignored until end of transaction block
22.746748685836792
current transaction is aborted, commands ignored until end of transaction block
22.74698233604431
current transaction is aborted, commands ignored until end of transaction block
22.74875044822693
current transaction is aborted, commands ignored until end of transaction block
22.749878406524658
current transaction is aborted, commands ignored until end of transaction block
22.751749515533447
current transaction is aborted, commands ignored until end of transaction block
22.752010583877563
current transaction is aborted, commands ignored until end of transaction block
22.753760814666748
current transaction is aborted, commands ignored until end of transaction block
22.754758596420288
current transaction is aborted, commands ignored until end of transaction block
22.755751848220825
current transaction is aborted, commands ignored until end of transaction block
22.75775384902954
current transaction is aborted, commands ignored until end of transaction block
22.759754180908203
current transaction is aborted, commands ignored until end of transaction block
22.759754180908203
current transaction is aborted, commands ignored until end of transaction block
22.760753870010376
current transaction is aborted, commands ignored until end of transaction block
22.76176166534424
current transaction is aborted, commands ignored until end of transaction block
22.762895345687866
current transaction is aborted, commands ignored until end of transaction block
22.763914585113525
current transaction is aborted, commands ignored until end of transaction block
22.764799118041992
current transaction is aborted, commands ignored until end of transaction block
22.765798330307007
current transaction is aborted, commands ignored until end of transaction block
22.76679277420044
current transaction is aborted, commands ignored until end of transaction block
22.76679277420044
current transaction is aborted, commands ignored until end of transaction block
22.76775884628296
current transaction is aborted, commands ignored until end of transaction block
22.76874542236328
current transaction is aborted, commands ignored until end of transaction block
22.769750118255615
current transaction is aborted, commands ignored until end of transaction block
22.770901441574097
current transaction is aborted, commands ignored until end of transaction block
22.771777391433716
current transaction is aborted, commands ignored until end of transaction block
22.77275824546814
current transaction is aborted, commands ignored until end of transaction block
22.77375102043152
current transaction is aborted, commands ignored until end of transaction block
22.77375102043152
current transaction is aborted, commands ignored until end of transaction block
22.774925231933594
current transaction is aborted, commands ignored until end of transaction block
22.775749444961548
current transaction is aborted, commands ignored until end of transaction block
22.776758193969727
current transaction is aborted, commands ignored until end of transaction block
22.777754306793213
current transaction is aborted, commands ignored until end of transaction block
22.777754306793213
current transaction is aborted, commands ignored until end of transaction block
22.778791904449463
current transaction is aborted, commands ignored until end of transaction block
22.77979040145874
current transaction is aborted, commands ignored until end of transaction block
22.780794858932495
current transaction is aborted, commands ignored until end of transaction block
22.78178882598877
current transaction is aborted, commands ignored until end of transaction block
22.78178882598877
current transaction is aborted, commands ignored until end of transaction block
22.78283953666687
current transaction is aborted, commands ignored until end of transaction block
22.783774614334106
current transaction is aborted, commands ignored until end of transaction block
22.78487253189087
current transaction is aborted, commands ignored until end of transaction block
22.785749197006226
current transaction is aborted, commands ignored until end of transaction block
22.78679060935974
current transaction is aborted, commands ignored until end of transaction block
22.787856340408325
current transaction is aborted, commands ignored until end of transaction block
22.788751363754272
current transaction is aborted, commands ignored until end of transaction block
22.788751363754272
current transaction is aborted, commands ignored until end of transaction block
22.78975248336792
current transaction is aborted, commands ignored until end of transaction block
22.790791034698486
current transaction is aborted, commands ignored until end of transaction block
22.791749477386475
current transaction is aborted, commands ignored until end of transaction block
22.792751789093018
current transaction is aborted, commands ignored until end of transaction block
22.792751789093018
current transaction is aborted, commands ignored until end of transaction block
22.793956518173218
current transaction is aborted, commands ignored until end of transaction block
22.79479217529297
current transaction is aborted, commands ignored until end of transaction block
22.79582715034485
current transaction is aborted, commands ignored until end of transaction block
22.796748876571655
current transaction is aborted, commands ignored until end of transaction block
22.79775595664978
current transaction is aborted, commands ignored until end of transaction block
22.798746824264526
current transaction is aborted, commands ignored until end of transaction block
22.798746824264526
current transaction is aborted, commands ignored until end of transaction block
22.800788640975952
current transaction is aborted, commands ignored until end of transaction block
22.800788640975952
current transaction is aborted, commands ignored until end of transaction block
22.8017897605896
current transaction is aborted, commands ignored until end of transaction block
22.8027560710907
current transaction is aborted, commands ignored until end of transaction block
22.804757595062256
current transaction is aborted, commands ignored until end of transaction block
22.80575728416443
current transaction is aborted, commands ignored until end of transaction block
22.80676555633545
current transaction is aborted, commands ignored until end of transaction block
22.807751893997192
current transaction is aborted, commands ignored until end of transaction block
22.808796405792236
current transaction is aborted, commands ignored until end of transaction block
22.808796405792236
current transaction is aborted, commands ignored until end of transaction block
22.810303688049316
current transaction is aborted, commands ignored until end of transaction block
22.81075119972229
current transaction is aborted, commands ignored until end of transaction block
22.812747955322266
current transaction is aborted, commands ignored until end of transaction block
22.812747955322266
current transaction is aborted, commands ignored until end of transaction block
22.813879013061523
current transaction is aborted, commands ignored until end of transaction block
22.814747095108032
current transaction is aborted, commands ignored until end of transaction block
22.815967082977295
current transaction is aborted, commands ignored until end of transaction block
22.816940784454346
current transaction is aborted, commands ignored until end of transaction block
22.817776203155518
current transaction is aborted, commands ignored until end of transaction block
22.817776203155518
current transaction is aborted, commands ignored until end of transaction block
22.8188054561615
current transaction is aborted, commands ignored until end of transaction block
22.8197922706604
current transaction is aborted, commands ignored until end of transaction block
22.82076668739319
current transaction is aborted, commands ignored until end of transaction block
22.821884870529175
current transaction is aborted, commands ignored until end of transaction block
22.823023319244385
current transaction is aborted, commands ignored until end of transaction block
22.823752880096436
current transaction is aborted, commands ignored until end of transaction block
22.82474970817566
current transaction is aborted, commands ignored until end of transaction block
22.82575249671936
current transaction is aborted, commands ignored until end of transaction block
22.826791763305664
current transaction is aborted, commands ignored until end of transaction block
22.826791763305664
current transaction is aborted, commands ignored until end of transaction block
22.827790021896362
current transaction is aborted, commands ignored until end of transaction block
22.828937768936157
current transaction is aborted, commands ignored until end of transaction block
22.829794883728027
current transaction is aborted, commands ignored until end of transaction block
22.830744743347168
current transaction is aborted, commands ignored until end of transaction block
22.830744743347168
current transaction is aborted, commands ignored until end of transaction block
22.83175492286682
current transaction is aborted, commands ignored until end of transaction block
22.83280062675476
current transaction is aborted, commands ignored until end of transaction block
22.833752393722534
current transaction is aborted, commands ignored until end of transaction block
22.834819793701172
current transaction is aborted, commands ignored until end of transaction block
22.83579158782959
current transaction is aborted, commands ignored until end of transaction block
22.836790084838867
current transaction is aborted, commands ignored until end of transaction block
22.836790084838867
current transaction is aborted, commands ignored until end of transaction block
22.838112592697144
current transaction is aborted, commands ignored until end of transaction block
22.838751792907715
current transaction is aborted, commands ignored until end of transaction block
22.840753316879272
current transaction is aborted, commands ignored until end of transaction block
22.840753316879272
current transaction is aborted, commands ignored until end of transaction block
22.8422212600708
current transaction is aborted, commands ignored until end of transaction block
23.414782762527466
current transaction is aborted, commands ignored until end of transaction block
23.415767431259155
current transaction is aborted, commands ignored until end of transaction block
23.416789054870605
current transaction is aborted, commands ignored until end of transaction block
23.41778826713562
current transaction is aborted, commands ignored until end of transaction block
23.41808581352234
current transaction is aborted, commands ignored until end of transaction block
23.41879105567932
current transaction is aborted, commands ignored until end of transaction block
23.419748544692993
current transaction is aborted, commands ignored until end of transaction block
23.420835733413696
current transaction is aborted, commands ignored until end of transaction block
23.421749353408813
current transaction is aborted, commands ignored until end of transaction block
23.42276096343994
current transaction is aborted, commands ignored until end of transaction block
23.423750400543213
current transaction is aborted, commands ignored until end of transaction block
23.425054788589478
current transaction is aborted, commands ignored until end of transaction block
23.4257493019104
current transaction is aborted, commands ignored until end of transaction block
23.4257493019104
current transaction is aborted, commands ignored until end of transaction block
23.426788568496704
current transaction is aborted, commands ignored until end of transaction block
23.427753686904907
current transaction is aborted, commands ignored until end of transaction block
23.4287531375885
current transaction is aborted, commands ignored until end of transaction block
23.4287531375885
current transaction is aborted, commands ignored until end of transaction block
23.429794788360596
current transaction is aborted, commands ignored until end of transaction block
23.43075156211853
current transaction is aborted, commands ignored until end of transaction block
23.431997060775757
current transaction is aborted, commands ignored until end of transaction block
23.432788372039795
current transaction is aborted, commands ignored until end of transaction block
23.433788776397705
current transaction is aborted, commands ignored until end of transaction block
23.433788776397705
current transaction is aborted, commands ignored until end of transaction block
23.434854984283447
current transaction is aborted, commands ignored until end of transaction block
23.435900926589966
current transaction is aborted, commands ignored until end of transaction block
23.436781644821167
current transaction is aborted, commands ignored until end of transaction block
23.437790155410767
current transaction is aborted, commands ignored until end of transaction block
23.4387526512146
current transaction is aborted, commands ignored until end of transaction block
23.439934253692627
current transaction is aborted, commands ignored until end of transaction block
23.440937042236328
current transaction is aborted, commands ignored until end of transaction block
23.441749095916748
current transaction is aborted, commands ignored until end of transaction block
23.441749095916748
current transaction is aborted, commands ignored until end of transaction block
23.442754983901978
current transaction is aborted, commands ignored until end of transaction block
23.443770170211792
current transaction is aborted, commands ignored until end of transaction block
23.44482970237732
current transaction is aborted, commands ignored until end of transaction block
23.44482970237732
current transaction is aborted, commands ignored until end of transaction block
23.446750164031982
current transaction is aborted, commands ignored until end of transaction block
23.446750164031982
current transaction is aborted, commands ignored until end of transaction block
23.44789171218872
current transaction is aborted, commands ignored until end of transaction block
23.448745727539062
current transaction is aborted, commands ignored until end of transaction block
23.448745727539062
current transaction is aborted, commands ignored until end of transaction block
23.45179796218872
current transaction is aborted, commands ignored until end of transaction block
23.452757358551025
current transaction is aborted, commands ignored until end of transaction block
23.453757286071777
current transaction is aborted, commands ignored until end of transaction block
23.454755783081055
current transaction is aborted, commands ignored until end of transaction block
23.454755783081055
current transaction is aborted, commands ignored until end of transaction block
23.45574688911438
current transaction is aborted, commands ignored until end of transaction block
23.456748485565186
current transaction is aborted, commands ignored until end of transaction block
23.45775604248047
current transaction is aborted, commands ignored until end of transaction block
23.458857536315918
current transaction is aborted, commands ignored until end of transaction block
23.45975160598755
current transaction is aborted, commands ignored until end of transaction block
23.460750818252563
current transaction is aborted, commands ignored until end of transaction block
23.460750818252563
current transaction is aborted, commands ignored until end of transaction block
23.46184277534485
current transaction is aborted, commands ignored until end of transaction block
23.462754726409912
current transaction is aborted, commands ignored until end of transaction block
23.46376132965088
current transaction is aborted, commands ignored until end of transaction block
23.464747667312622
current transaction is aborted, commands ignored until end of transaction block
23.465994358062744
current transaction is aborted, commands ignored until end of transaction block
23.466790199279785
current transaction is aborted, commands ignored until end of transaction block
23.46774935722351
current transaction is aborted, commands ignored until end of transaction block
23.468793153762817
current transaction is aborted, commands ignored until end of transaction block
23.46980571746826
current transaction is aborted, commands ignored until end of transaction block
23.470752000808716
current transaction is aborted, commands ignored until end of transaction block
23.471765756607056
current transaction is aborted, commands ignored until end of transaction block
23.472748517990112
current transaction is aborted, commands ignored until end of transaction block
23.473748683929443
current transaction is aborted, commands ignored until end of transaction block
23.474752187728882
current transaction is aborted, commands ignored until end of transaction block
23.47579073905945
current transaction is aborted, commands ignored until end of transaction block
23.47579073905945
current transaction is aborted, commands ignored until end of transaction block
23.477753400802612
current transaction is aborted, commands ignored until end of transaction block
23.47876501083374
current transaction is aborted, commands ignored until end of transaction block
23.479748249053955
current transaction is aborted, commands ignored until end of transaction block
23.480748176574707
current transaction is aborted, commands ignored until end of transaction block
23.480748176574707
current transaction is aborted, commands ignored until end of transaction block
23.481746673583984
current transaction is aborted, commands ignored until end of transaction block
23.482841730117798
current transaction is aborted, commands ignored until end of transaction block
23.483744859695435
current transaction is aborted, commands ignored until end of transaction block
23.48474884033203
current transaction is aborted, commands ignored until end of transaction block
23.48474884033203
current transaction is aborted, commands ignored until end of transaction block
23.485748529434204
current transaction is aborted, commands ignored until end of transaction block
23.487227201461792
current transaction is aborted, commands ignored until end of transaction block
23.4878568649292
current transaction is aborted, commands ignored until end of transaction block
23.48878288269043
current transaction is aborted, commands ignored until end of transaction block
23.48974871635437
current transaction is aborted, commands ignored until end of transaction block
23.490747451782227
current transaction is aborted, commands ignored until end of transaction block
23.490747451782227
current transaction is aborted, commands ignored until end of transaction block
23.49274778366089
current transaction is aborted, commands ignored until end of transaction block
23.49274778366089
current transaction is aborted, commands ignored until end of transaction block
23.494094133377075
current transaction is aborted, commands ignored until end of transaction block
23.49474811553955
current transaction is aborted, commands ignored until end of transaction block
23.49574875831604
current transaction is aborted, commands ignored until end of transaction block
23.495830059051514
current transaction is aborted, commands ignored until end of transaction block
23.496747970581055
current transaction is aborted, commands ignored until end of transaction block
23.497748613357544
current transaction is aborted, commands ignored until end of transaction block
23.498809814453125
current transaction is aborted, commands ignored until end of transaction block
23.499831676483154
current transaction is aborted, commands ignored until end of transaction block
23.500753164291382
current transaction is aborted, commands ignored until end of transaction block
23.50174903869629
current transaction is aborted, commands ignored until end of transaction block
23.50174903869629
current transaction is aborted, commands ignored until end of transaction block
23.5027494430542
current transaction is aborted, commands ignored until end of transaction block
23.50375747680664
current transaction is aborted, commands ignored until end of transaction block
23.505748748779297
current transaction is aborted, commands ignored until end of transaction block
23.50677251815796
current transaction is aborted, commands ignored until end of transaction block
23.508759021759033
current transaction is aborted, commands ignored until end of transaction block
23.508759021759033
current transaction is aborted, commands ignored until end of transaction block
23.50975012779236
current transaction is aborted, commands ignored until end of transaction block
23.510790586471558
current transaction is aborted, commands ignored until end of transaction block
23.511749505996704
current transaction is aborted, commands ignored until end of transaction block
23.51274871826172
current transaction is aborted, commands ignored until end of transaction block
23.51300597190857
current transaction is aborted, commands ignored until end of transaction block
23.51379108428955
current transaction is aborted, commands ignored until end of transaction block
23.514747619628906
current transaction is aborted, commands ignored until end of transaction block
23.51601243019104
current transaction is aborted, commands ignored until end of transaction block
23.51675271987915
current transaction is aborted, commands ignored until end of transaction block
23.517903804779053
current transaction is aborted, commands ignored until end of transaction block
23.51875352859497
current transaction is aborted, commands ignored until end of transaction block
23.519757986068726
current transaction is aborted, commands ignored until end of transaction block
23.520750761032104
current transaction is aborted, commands ignored until end of transaction block
23.520840406417847
current transaction is aborted, commands ignored until end of transaction block
23.522085189819336
current transaction is aborted, commands ignored until end of transaction block
23.522789239883423
current transaction is aborted, commands ignored until end of transaction block
23.52375078201294
current transaction is aborted, commands ignored until end of transaction block
23.52475357055664
current transaction is aborted, commands ignored until end of transaction block
23.52475357055664
current transaction is aborted, commands ignored until end of transaction block
23.52577018737793
current transaction is aborted, commands ignored until end of transaction block
23.527340173721313
current transaction is aborted, commands ignored until end of transaction block
23.52779483795166
current transaction is aborted, commands ignored until end of transaction block
23.5287606716156
current transaction is aborted, commands ignored until end of transaction block
23.529755115509033
current transaction is aborted, commands ignored until end of transaction block
23.529755115509033
current transaction is aborted, commands ignored until end of transaction block
23.53078007698059
current transaction is aborted, commands ignored until end of transaction block
23.531789302825928
current transaction is aborted, commands ignored until end of transaction block
23.53283977508545
current transaction is aborted, commands ignored until end of transaction block
23.534689903259277
current transaction is aborted, commands ignored until end of transaction block
23.534769296646118
current transaction is aborted, commands ignored until end of transaction block
23.536072969436646
current transaction is aborted, commands ignored until end of transaction block
23.53679060935974
current transaction is aborted, commands ignored until end of transaction block
23.53777241706848
current transaction is aborted, commands ignored until end of transaction block
23.53777241706848
current transaction is aborted, commands ignored until end of transaction block
23.538790225982666
current transaction is aborted, commands ignored until end of transaction block
23.539748907089233
current transaction is aborted, commands ignored until end of transaction block
23.54074501991272
current transaction is aborted, commands ignored until end of transaction block
23.54199719429016
current transaction is aborted, commands ignored until end of transaction block
23.542752265930176
current transaction is aborted, commands ignored until end of transaction block
23.543760061264038
current transaction is aborted, commands ignored until end of transaction block
23.544790744781494
current transaction is aborted, commands ignored until end of transaction block
23.544790744781494
current transaction is aborted, commands ignored until end of transaction block
23.545799732208252
current transaction is aborted, commands ignored until end of transaction block
23.546758890151978
current transaction is aborted, commands ignored until end of transaction block
23.547811031341553
current transaction is aborted, commands ignored until end of transaction block
23.548755645751953
current transaction is aborted, commands ignored until end of transaction block
23.549790143966675
current transaction is aborted, commands ignored until end of transaction block
23.549790143966675
current transaction is aborted, commands ignored until end of transaction block
23.55080771446228
current transaction is aborted, commands ignored until end of transaction block
23.55192255973816
current transaction is aborted, commands ignored until end of transaction block
23.55279016494751
current transaction is aborted, commands ignored until end of transaction block
23.55379343032837
current transaction is aborted, commands ignored until end of transaction block
23.554805755615234
current transaction is aborted, commands ignored until end of transaction block
23.55609965324402
current transaction is aborted, commands ignored until end of transaction block
23.55678915977478
current transaction is aborted, commands ignored until end of transaction block
23.558066368103027
current transaction is aborted, commands ignored until end of transaction block
23.558917999267578
current transaction is aborted, commands ignored until end of transaction block
23.5597984790802
current transaction is aborted, commands ignored until end of transaction block
23.560753107070923
current transaction is aborted, commands ignored until end of transaction block
23.56226396560669
current transaction is aborted, commands ignored until end of transaction block
23.562748670578003
current transaction is aborted, commands ignored until end of transaction block
23.563748836517334
current transaction is aborted, commands ignored until end of transaction block
23.564745903015137
current transaction is aborted, commands ignored until end of transaction block
23.565795421600342
current transaction is aborted, commands ignored until end of transaction block
23.565795421600342
current transaction is aborted, commands ignored until end of transaction block
23.566776990890503
current transaction is aborted, commands ignored until end of transaction block
23.567761659622192
current transaction is aborted, commands ignored until end of transaction block
23.56874656677246
current transaction is aborted, commands ignored until end of transaction block
23.569759368896484
current transaction is aborted, commands ignored until end of transaction block
23.57079029083252
current transaction is aborted, commands ignored until end of transaction block
23.57175302505493
current transaction is aborted, commands ignored until end of transaction block
23.572790145874023
current transaction is aborted, commands ignored until end of transaction block
23.57314109802246
current transaction is aborted, commands ignored until end of transaction block
23.573744535446167
current transaction is aborted, commands ignored until end of transaction block
23.574751377105713
current transaction is aborted, commands ignored until end of transaction block
23.57603907585144
current transaction is aborted, commands ignored until end of transaction block
23.576753616333008
current transaction is aborted, commands ignored until end of transaction block
23.5777530670166
current transaction is aborted, commands ignored until end of transaction block
23.5777530670166
current transaction is aborted, commands ignored until end of transaction block
23.579004526138306
current transaction is aborted, commands ignored until end of transaction block
23.579914093017578
current transaction is aborted, commands ignored until end of transaction block
23.580745458602905
current transaction is aborted, commands ignored until end of transaction block
23.581748485565186
current transaction is aborted, commands ignored until end of transaction block
23.582746267318726
current transaction is aborted, commands ignored until end of transaction block
23.583788871765137
current transaction is aborted, commands ignored until end of transaction block
23.583788871765137
current transaction is aborted, commands ignored until end of transaction block
23.58500051498413
current transaction is aborted, commands ignored until end of transaction block
23.585914373397827
current transaction is aborted, commands ignored until end of transaction block
23.58678960800171
current transaction is aborted, commands ignored until end of transaction block
23.58678960800171
current transaction is aborted, commands ignored until end of transaction block
23.58783507347107
current transaction is aborted, commands ignored until end of transaction block
23.58875799179077
current transaction is aborted, commands ignored until end of transaction block
23.590747833251953
current transaction is aborted, commands ignored until end of transaction block
23.590747833251953
current transaction is aborted, commands ignored until end of transaction block
23.591789722442627
current transaction is aborted, commands ignored until end of transaction block
23.59279155731201
current transaction is aborted, commands ignored until end of transaction block
23.593790292739868
current transaction is aborted, commands ignored until end of transaction block
23.59475088119507
current transaction is aborted, commands ignored until end of transaction block
23.59575366973877
current transaction is aborted, commands ignored until end of transaction block
23.59575366973877
current transaction is aborted, commands ignored until end of transaction block
23.597190856933594
current transaction is aborted, commands ignored until end of transaction block
23.59822940826416
current transaction is aborted, commands ignored until end of transaction block
23.598790884017944
current transaction is aborted, commands ignored until end of transaction block
23.59975242614746
current transaction is aborted, commands ignored until end of transaction block
23.60075879096985
current transaction is aborted, commands ignored until end of transaction block
23.60075879096985
current transaction is aborted, commands ignored until end of transaction block
23.60175108909607
current transaction is aborted, commands ignored until end of transaction block
23.60374641418457
current transaction is aborted, commands ignored until end of transaction block
23.60374641418457
current transaction is aborted, commands ignored until end of transaction block
23.60478973388672
current transaction is aborted, commands ignored until end of transaction block
23.60596990585327
current transaction is aborted, commands ignored until end of transaction block
23.606789588928223
current transaction is aborted, commands ignored until end of transaction block
23.607788801193237
current transaction is aborted, commands ignored until end of transaction block
23.607788801193237
current transaction is aborted, commands ignored until end of transaction block
23.60875153541565
current transaction is aborted, commands ignored until end of transaction block
23.609758138656616
current transaction is aborted, commands ignored until end of transaction block
23.611123085021973
current transaction is aborted, commands ignored until end of transaction block
23.61179208755493
current transaction is aborted, commands ignored until end of transaction block
23.612754821777344
current transaction is aborted, commands ignored until end of transaction block
23.613905668258667
current transaction is aborted, commands ignored until end of transaction block
23.614017724990845
current transaction is aborted, commands ignored until end of transaction block
23.614789247512817
current transaction is aborted, commands ignored until end of transaction block
23.615752696990967
current transaction is aborted, commands ignored until end of transaction block
23.617748975753784
current transaction is aborted, commands ignored until end of transaction block
23.618749141693115
current transaction is aborted, commands ignored until end of transaction block
23.61974835395813
current transaction is aborted, commands ignored until end of transaction block
23.619789361953735
current transaction is aborted, commands ignored until end of transaction block
23.620752334594727
current transaction is aborted, commands ignored until end of transaction block
23.621943473815918
current transaction is aborted, commands ignored until end of transaction block
23.62275743484497
current transaction is aborted, commands ignored until end of transaction block
23.62375831604004
current transaction is aborted, commands ignored until end of transaction block
23.62520146369934
current transaction is aborted, commands ignored until end of transaction block
23.62580442428589
current transaction is aborted, commands ignored until end of transaction block
23.626789331436157
current transaction is aborted, commands ignored until end of transaction block
23.62775754928589
current transaction is aborted, commands ignored until end of transaction block
23.628758430480957
current transaction is aborted, commands ignored until end of transaction block
23.629752159118652
current transaction is aborted, commands ignored until end of transaction block
23.629752159118652
current transaction is aborted, commands ignored until end of transaction block
23.630805730819702
current transaction is aborted, commands ignored until end of transaction block
23.63175106048584
current transaction is aborted, commands ignored until end of transaction block
23.632789373397827
current transaction is aborted, commands ignored until end of transaction block
23.633790969848633
current transaction is aborted, commands ignored until end of transaction block
23.634749174118042
current transaction is aborted, commands ignored until end of transaction block
23.635751008987427
current transaction is aborted, commands ignored until end of transaction block
23.635751008987427
current transaction is aborted, commands ignored until end of transaction block
23.637755632400513
current transaction is aborted, commands ignored until end of transaction block
23.638757705688477
current transaction is aborted, commands ignored until end of transaction block
23.639793157577515
current transaction is aborted, commands ignored until end of transaction block
23.64078950881958
current transaction is aborted, commands ignored until end of transaction block
23.641836643218994
current transaction is aborted, commands ignored until end of transaction block
23.64275574684143
current transaction is aborted, commands ignored until end of transaction block
23.64375376701355
current transaction is aborted, commands ignored until end of transaction block
23.64484977722168
current transaction is aborted, commands ignored until end of transaction block
23.645748376846313
current transaction is aborted, commands ignored until end of transaction block
23.64675211906433
current transaction is aborted, commands ignored until end of transaction block
23.64675211906433
current transaction is aborted, commands ignored until end of transaction block
23.647804021835327
current transaction is aborted, commands ignored until end of transaction block
23.649032831192017
current transaction is aborted, commands ignored until end of transaction block
23.649791717529297
current transaction is aborted, commands ignored until end of transaction block
23.6507511138916
current transaction is aborted, commands ignored until end of transaction block
23.651750087738037
current transaction is aborted, commands ignored until end of transaction block
23.65375256538391
current transaction is aborted, commands ignored until end of transaction block
23.65375256538391
current transaction is aborted, commands ignored until end of transaction block
23.65475082397461
current transaction is aborted, commands ignored until end of transaction block
23.65578269958496
current transaction is aborted, commands ignored until end of transaction block
23.65684223175049
current transaction is aborted, commands ignored until end of transaction block
23.658761262893677
current transaction is aborted, commands ignored until end of transaction block
23.659786462783813
current transaction is aborted, commands ignored until end of transaction block
23.660748958587646
current transaction is aborted, commands ignored until end of transaction block
23.661784648895264
current transaction is aborted, commands ignored until end of transaction block
23.661784648895264
current transaction is aborted, commands ignored until end of transaction block
23.66278886795044
current transaction is aborted, commands ignored until end of transaction block
23.66379475593567
current transaction is aborted, commands ignored until end of transaction block
23.664748668670654
current transaction is aborted, commands ignored until end of transaction block
23.666752815246582
current transaction is aborted, commands ignored until end of transaction block
23.66775369644165
current transaction is aborted, commands ignored until end of transaction block
23.66775369644165
current transaction is aborted, commands ignored until end of transaction block
23.66878080368042
current transaction is aborted, commands ignored until end of transaction block
23.670247316360474
current transaction is aborted, commands ignored until end of transaction block
23.670791149139404
current transaction is aborted, commands ignored until end of transaction block
23.6717586517334
current transaction is aborted, commands ignored until end of transaction block
23.672773361206055
current transaction is aborted, commands ignored until end of transaction block
23.67375946044922
current transaction is aborted, commands ignored until end of transaction block
23.67375946044922
current transaction is aborted, commands ignored until end of transaction block
23.675203561782837
current transaction is aborted, commands ignored until end of transaction block
23.675856113433838
current transaction is aborted, commands ignored until end of transaction block
23.676745176315308
current transaction is aborted, commands ignored until end of transaction block
23.676877975463867
current transaction is aborted, commands ignored until end of transaction block
23.677794933319092
current transaction is aborted, commands ignored until end of transaction block
23.678754091262817
current transaction is aborted, commands ignored until end of transaction block
23.679745197296143
current transaction is aborted, commands ignored until end of transaction block
23.679745197296143
current transaction is aborted, commands ignored until end of transaction block
23.68174910545349
current transaction is aborted, commands ignored until end of transaction block
23.68174910545349
current transaction is aborted, commands ignored until end of transaction block
23.68278932571411
current transaction is aborted, commands ignored until end of transaction block
23.68375039100647
current transaction is aborted, commands ignored until end of transaction block
23.684791803359985
current transaction is aborted, commands ignored until end of transaction block
23.685750007629395
current transaction is aborted, commands ignored until end of transaction block
23.686809301376343
current transaction is aborted, commands ignored until end of transaction block
23.687758207321167
current transaction is aborted, commands ignored until end of transaction block
23.688751459121704
current transaction is aborted, commands ignored until end of transaction block
23.68974804878235
current transaction is aborted, commands ignored until end of transaction block
23.690747022628784
current transaction is aborted, commands ignored until end of transaction block
23.691749334335327
current transaction is aborted, commands ignored until end of transaction block
23.692748546600342
current transaction is aborted, commands ignored until end of transaction block
23.693788290023804
current transaction is aborted, commands ignored until end of transaction block
23.694791078567505
current transaction is aborted, commands ignored until end of transaction block
23.695789575576782
current transaction is aborted, commands ignored until end of transaction block
23.695789575576782
current transaction is aborted, commands ignored until end of transaction block
23.696789503097534
current transaction is aborted, commands ignored until end of transaction block
23.698047637939453
current transaction is aborted, commands ignored until end of transaction block
23.698744535446167
current transaction is aborted, commands ignored until end of transaction block
23.69975996017456
current transaction is aborted, commands ignored until end of transaction block
23.700767040252686
current transaction is aborted, commands ignored until end of transaction block
23.701884031295776
current transaction is aborted, commands ignored until end of transaction block
23.702850818634033
current transaction is aborted, commands ignored until end of transaction block
23.703789472579956
current transaction is aborted, commands ignored until end of transaction block
23.704744577407837
current transaction is aborted, commands ignored until end of transaction block
23.704744577407837
current transaction is aborted, commands ignored until end of transaction block
23.706111669540405
current transaction is aborted, commands ignored until end of transaction block
23.70674705505371
current transaction is aborted, commands ignored until end of transaction block
23.707791805267334
current transaction is aborted, commands ignored until end of transaction block
23.70888924598694
current transaction is aborted, commands ignored until end of transaction block
23.710079431533813
current transaction is aborted, commands ignored until end of transaction block
23.71093463897705
current transaction is aborted, commands ignored until end of transaction block
23.71093463897705
current transaction is aborted, commands ignored until end of transaction block
23.712027311325073
current transaction is aborted, commands ignored until end of transaction block
23.712887048721313
current transaction is aborted, commands ignored until end of transaction block
23.713891744613647
current transaction is aborted, commands ignored until end of transaction block
23.71513819694519
current transaction is aborted, commands ignored until end of transaction block
23.71592664718628
current transaction is aborted, commands ignored until end of transaction block
23.71692705154419
current transaction is aborted, commands ignored until end of transaction block
23.71692705154419
current transaction is aborted, commands ignored until end of transaction block
23.71788454055786
current transaction is aborted, commands ignored until end of transaction block
23.718960285186768
current transaction is aborted, commands ignored until end of transaction block
23.71988868713379
current transaction is aborted, commands ignored until end of transaction block
23.72089457511902
current transaction is aborted, commands ignored until end of transaction block
23.722083806991577
current transaction is aborted, commands ignored until end of transaction block
23.72288942337036
current transaction is aborted, commands ignored until end of transaction block
23.723889112472534
current transaction is aborted, commands ignored until end of transaction block
23.724236965179443
current transaction is aborted, commands ignored until end of transaction block
23.724934101104736
current transaction is aborted, commands ignored until end of transaction block
23.72608256340027
current transaction is aborted, commands ignored until end of transaction block
23.726888179779053
current transaction is aborted, commands ignored until end of transaction block
23.727890729904175
current transaction is aborted, commands ignored until end of transaction block
23.729182243347168
current transaction is aborted, commands ignored until end of transaction block
23.729928493499756
current transaction is aborted, commands ignored until end of transaction block
23.730925798416138
current transaction is aborted, commands ignored until end of transaction block
23.730925798416138
current transaction is aborted, commands ignored until end of transaction block
23.732927322387695
current transaction is aborted, commands ignored until end of transaction block
23.732927322387695
current transaction is aborted, commands ignored until end of transaction block
23.733893632888794
current transaction is aborted, commands ignored until end of transaction block
23.734928369522095
current transaction is aborted, commands ignored until end of transaction block
23.735934734344482
current transaction is aborted, commands ignored until end of transaction block
23.736887216567993
current transaction is aborted, commands ignored until end of transaction block
23.73792815208435
current transaction is aborted, commands ignored until end of transaction block
23.73792815208435
current transaction is aborted, commands ignored until end of transaction block
23.738927841186523
current transaction is aborted, commands ignored until end of transaction block
23.739927053451538
current transaction is aborted, commands ignored until end of transaction block
23.740925073623657
current transaction is aborted, commands ignored until end of transaction block
23.74191188812256
current transaction is aborted, commands ignored until end of transaction block
23.742929220199585
current transaction is aborted, commands ignored until end of transaction block
23.744348526000977
current transaction is aborted, commands ignored until end of transaction block
23.744932651519775
current transaction is aborted, commands ignored until end of transaction block
23.745925426483154
current transaction is aborted, commands ignored until end of transaction block
23.745925426483154
current transaction is aborted, commands ignored until end of transaction block
23.74714684486389
current transaction is aborted, commands ignored until end of transaction block
23.747894525527954
current transaction is aborted, commands ignored until end of transaction block
23.748947858810425
current transaction is aborted, commands ignored until end of transaction block
23.749889373779297
current transaction is aborted, commands ignored until end of transaction block
23.75088930130005
current transaction is aborted, commands ignored until end of transaction block
23.751927375793457
current transaction is aborted, commands ignored until end of transaction block
23.751927375793457
current transaction is aborted, commands ignored until end of transaction block
23.75289297103882
current transaction is aborted, commands ignored until end of transaction block
23.753989696502686
current transaction is aborted, commands ignored until end of transaction block
23.754894733428955
current transaction is aborted, commands ignored until end of transaction block
23.755916357040405
current transaction is aborted, commands ignored until end of transaction block
23.757091760635376
current transaction is aborted, commands ignored until end of transaction block
23.7579288482666
current transaction is aborted, commands ignored until end of transaction block
23.758885860443115
current transaction is aborted, commands ignored until end of transaction block
23.759886264801025
current transaction is aborted, commands ignored until end of transaction block
23.761257886886597
current transaction is aborted, commands ignored until end of transaction block
23.761897563934326
current transaction is aborted, commands ignored until end of transaction block
23.762890100479126
current transaction is aborted, commands ignored until end of transaction block
23.763935565948486
current transaction is aborted, commands ignored until end of transaction block
23.764893531799316
current transaction is aborted, commands ignored until end of transaction block
23.766109704971313
current transaction is aborted, commands ignored until end of transaction block
23.7669620513916
current transaction is aborted, commands ignored until end of transaction block
23.767927169799805
current transaction is aborted, commands ignored until end of transaction block
23.767927169799805
current transaction is aborted, commands ignored until end of transaction block
23.769243001937866
current transaction is aborted, commands ignored until end of transaction block
23.76988983154297
current transaction is aborted, commands ignored until end of transaction block
23.770887851715088
current transaction is aborted, commands ignored until end of transaction block
23.772141456604004
current transaction is aborted, commands ignored until end of transaction block
23.77289080619812
current transaction is aborted, commands ignored until end of transaction block
23.773523092269897
current transaction is aborted, commands ignored until end of transaction block
23.77392554283142
current transaction is aborted, commands ignored until end of transaction block
23.774991512298584
current transaction is aborted, commands ignored until end of transaction block
23.775888204574585
current transaction is aborted, commands ignored until end of transaction block
23.77694058418274
current transaction is aborted, commands ignored until end of transaction block
23.77789282798767
current transaction is aborted, commands ignored until end of transaction block
23.778889179229736
current transaction is aborted, commands ignored until end of transaction block
23.778889179229736
current transaction is aborted, commands ignored until end of transaction block
23.779924392700195
current transaction is aborted, commands ignored until end of transaction block
23.780892372131348
current transaction is aborted, commands ignored until end of transaction block
23.781928777694702
current transaction is aborted, commands ignored until end of transaction block
23.781928777694702
current transaction is aborted, commands ignored until end of transaction block
23.78394913673401
current transaction is aborted, commands ignored until end of transaction block
23.784892797470093
current transaction is aborted, commands ignored until end of transaction block
23.785966157913208
current transaction is aborted, commands ignored until end of transaction block
23.7869291305542
current transaction is aborted, commands ignored until end of transaction block
23.7869291305542
current transaction is aborted, commands ignored until end of transaction block
23.78798747062683
current transaction is aborted, commands ignored until end of transaction block
23.7888925075531
current transaction is aborted, commands ignored until end of transaction block
23.789971828460693
current transaction is aborted, commands ignored until end of transaction block
23.790892839431763
current transaction is aborted, commands ignored until end of transaction block
23.79188370704651
current transaction is aborted, commands ignored until end of transaction block
23.792926788330078
current transaction is aborted, commands ignored until end of transaction block
23.793108463287354
current transaction is aborted, commands ignored until end of transaction block
23.793926000595093
current transaction is aborted, commands ignored until end of transaction block
23.79492497444153
current transaction is aborted, commands ignored until end of transaction block
23.795928239822388
current transaction is aborted, commands ignored until end of transaction block
23.796208381652832
current transaction is aborted, commands ignored until end of transaction block
23.796887636184692
current transaction is aborted, commands ignored until end of transaction block
23.798479557037354
current transaction is aborted, commands ignored until end of transaction block
23.798925161361694
current transaction is aborted, commands ignored until end of transaction block
23.798925161361694
current transaction is aborted, commands ignored until end of transaction block
23.800135612487793
current transaction is aborted, commands ignored until end of transaction block
23.801134824752808
current transaction is aborted, commands ignored until end of transaction block
23.801931858062744
current transaction is aborted, commands ignored until end of transaction block
23.802884578704834
current transaction is aborted, commands ignored until end of transaction block
23.802884578704834
current transaction is aborted, commands ignored until end of transaction block
23.804046392440796
current transaction is aborted, commands ignored until end of transaction block
23.804885625839233
current transaction is aborted, commands ignored until end of transaction block
23.805925607681274
current transaction is aborted, commands ignored until end of transaction block
23.806931257247925
current transaction is aborted, commands ignored until end of transaction block
23.806931257247925
current transaction is aborted, commands ignored until end of transaction block
23.808196544647217
current transaction is aborted, commands ignored until end of transaction block
23.80872106552124
current transaction is aborted, commands ignored until end of transaction block
23.809787034988403
current transaction is aborted, commands ignored until end of transaction block
23.810754537582397
current transaction is aborted, commands ignored until end of transaction block
23.81176257133484
current transaction is aborted, commands ignored until end of transaction block
23.812744140625
current transaction is aborted, commands ignored until end of transaction block
23.813894748687744
current transaction is aborted, commands ignored until end of transaction block
23.814051151275635
current transaction is aborted, commands ignored until end of transaction block
23.814789056777954
current transaction is aborted, commands ignored until end of transaction block
23.81578516960144
current transaction is aborted, commands ignored until end of transaction block
23.81674361228943
current transaction is aborted, commands ignored until end of transaction block
23.81706714630127
current transaction is aborted, commands ignored until end of transaction block
23.81784963607788
current transaction is aborted, commands ignored until end of transaction block
23.81877040863037
current transaction is aborted, commands ignored until end of transaction block
23.820013999938965
current transaction is aborted, commands ignored until end of transaction block
23.820751905441284
current transaction is aborted, commands ignored until end of transaction block
23.820751905441284
current transaction is aborted, commands ignored until end of transaction block
23.821751356124878
current transaction is aborted, commands ignored until end of transaction block
23.822750091552734
current transaction is aborted, commands ignored until end of transaction block
23.823742151260376
current transaction is aborted, commands ignored until end of transaction block
23.824745416641235
current transaction is aborted, commands ignored until end of transaction block
23.826786279678345
current transaction is aborted, commands ignored until end of transaction block
23.827749013900757
current transaction is aborted, commands ignored until end of transaction block
23.82878541946411
current transaction is aborted, commands ignored until end of transaction block
23.82878541946411
current transaction is aborted, commands ignored until end of transaction block
23.829897165298462
current transaction is aborted, commands ignored until end of transaction block
23.830939054489136
current transaction is aborted, commands ignored until end of transaction block
23.83174467086792
current transaction is aborted, commands ignored until end of transaction block
23.83274531364441
current transaction is aborted, commands ignored until end of transaction block
23.83274531364441
current transaction is aborted, commands ignored until end of transaction block
23.833784580230713
current transaction is aborted, commands ignored until end of transaction block
23.834757804870605
current transaction is aborted, commands ignored until end of transaction block
23.835784435272217
current transaction is aborted, commands ignored until end of transaction block
23.835784435272217
current transaction is aborted, commands ignored until end of transaction block
23.83678650856018
current transaction is aborted, commands ignored until end of transaction block
23.837743997573853
current transaction is aborted, commands ignored until end of transaction block
23.83875346183777
current transaction is aborted, commands ignored until end of transaction block
23.83974266052246
current transaction is aborted, commands ignored until end of transaction block
23.840744495391846
current transaction is aborted, commands ignored until end of transaction block
23.841748476028442
current transaction is aborted, commands ignored until end of transaction block
23.842752933502197
current transaction is aborted, commands ignored until end of transaction block
23.843815803527832
current transaction is aborted, commands ignored until end of transaction block
23.844833612442017
current transaction is aborted, commands ignored until end of transaction block
23.845746278762817
current transaction is aborted, commands ignored until end of transaction block
23.847742795944214
current transaction is aborted, commands ignored until end of transaction block
23.848744869232178
current transaction is aborted, commands ignored until end of transaction block
23.84974503517151
current transaction is aborted, commands ignored until end of transaction block
23.850744009017944
current transaction is aborted, commands ignored until end of transaction block
23.85174822807312
current transaction is aborted, commands ignored until end of transaction block
23.85174822807312
current transaction is aborted, commands ignored until end of transaction block
23.853748321533203
current transaction is aborted, commands ignored until end of transaction block
23.854747772216797
current transaction is aborted, commands ignored until end of transaction block
23.85574769973755
current transaction is aborted, commands ignored until end of transaction block
23.85574769973755
current transaction is aborted, commands ignored until end of transaction block
23.85690712928772
current transaction is aborted, commands ignored until end of transaction block
23.857789278030396
current transaction is aborted, commands ignored until end of transaction block
23.8587486743927
current transaction is aborted, commands ignored until end of transaction block
23.859747886657715
current transaction is aborted, commands ignored until end of transaction block
23.860748529434204
current transaction is aborted, commands ignored until end of transaction block
23.86174702644348
current transaction is aborted, commands ignored until end of transaction block
23.86174702644348
current transaction is aborted, commands ignored until end of transaction block
24.418302297592163
current transaction is aborted, commands ignored until end of transaction block
24.41928005218506
current transaction is aborted, commands ignored until end of transaction block
24.41928005218506
current transaction is aborted, commands ignored until end of transaction block
24.420263290405273
current transaction is aborted, commands ignored until end of transaction block
24.420263290405273
current transaction is aborted, commands ignored until end of transaction block
24.42133617401123
current transaction is aborted, commands ignored until end of transaction block
24.42326593399048
current transaction is aborted, commands ignored until end of transaction block
24.424264907836914
current transaction is aborted, commands ignored until end of transaction block
24.425271034240723
current transaction is aborted, commands ignored until end of transaction block
24.426262855529785
current transaction is aborted, commands ignored until end of transaction block
24.427265882492065
current transaction is aborted, commands ignored until end of transaction block
24.428264617919922
current transaction is aborted, commands ignored until end of transaction block
24.428264617919922
current transaction is aborted, commands ignored until end of transaction block
24.429263591766357
current transaction is aborted, commands ignored until end of transaction block
24.430266857147217
current transaction is aborted, commands ignored until end of transaction block
24.430266857147217
current transaction is aborted, commands ignored until end of transaction block
24.431330680847168
current transaction is aborted, commands ignored until end of transaction block
24.432266235351562
current transaction is aborted, commands ignored until end of transaction block
24.432266235351562
current transaction is aborted, commands ignored until end of transaction block
24.43336820602417
current transaction is aborted, commands ignored until end of transaction block
24.434596300125122
current transaction is aborted, commands ignored until end of transaction block
24.435264587402344
current transaction is aborted, commands ignored until end of transaction block
24.435264587402344
current transaction is aborted, commands ignored until end of transaction block
24.436267137527466
current transaction is aborted, commands ignored until end of transaction block
24.437270164489746
current transaction is aborted, commands ignored until end of transaction block
24.437270164489746
current transaction is aborted, commands ignored until end of transaction block
24.438339233398438
current transaction is aborted, commands ignored until end of transaction block
24.438339233398438
current transaction is aborted, commands ignored until end of transaction block
24.439265251159668
current transaction is aborted, commands ignored until end of transaction block
24.439265251159668
current transaction is aborted, commands ignored until end of transaction block
24.44026470184326
current transaction is aborted, commands ignored until end of transaction block
24.44026470184326
current transaction is aborted, commands ignored until end of transaction block
24.441266298294067
current transaction is aborted, commands ignored until end of transaction block
24.441266298294067
current transaction is aborted, commands ignored until end of transaction block
24.442265033721924
current transaction is aborted, commands ignored until end of transaction block
24.443268060684204
current transaction is aborted, commands ignored until end of transaction block
24.444278240203857
current transaction is aborted, commands ignored until end of transaction block
24.444278240203857
current transaction is aborted, commands ignored until end of transaction block
24.445504426956177
current transaction is aborted, commands ignored until end of transaction block
24.44626522064209
current transaction is aborted, commands ignored until end of transaction block
24.44626522064209
current transaction is aborted, commands ignored until end of transaction block
24.447299003601074
current transaction is aborted, commands ignored until end of transaction block
24.447299003601074
current transaction is aborted, commands ignored until end of transaction block
24.447299003601074
current transaction is aborted, commands ignored until end of transaction block
24.44830298423767
current transaction is aborted, commands ignored until end of transaction block
24.44926691055298
current transaction is aborted, commands ignored until end of transaction block
24.44926691055298
current transaction is aborted, commands ignored until end of transaction block
24.450302362442017
current transaction is aborted, commands ignored until end of transaction block
24.451273441314697
current transaction is aborted, commands ignored until end of transaction block
24.452264070510864
current transaction is aborted, commands ignored until end of transaction block
24.45326566696167
current transaction is aborted, commands ignored until end of transaction block
24.45432710647583
current transaction is aborted, commands ignored until end of transaction block
24.45432710647583
current transaction is aborted, commands ignored until end of transaction block
24.455410480499268
current transaction is aborted, commands ignored until end of transaction block
24.456264972686768
current transaction is aborted, commands ignored until end of transaction block
24.456264972686768
current transaction is aborted, commands ignored until end of transaction block
24.45730710029602
current transaction is aborted, commands ignored until end of transaction block
24.45826745033264
current transaction is aborted, commands ignored until end of transaction block
24.45826745033264
current transaction is aborted, commands ignored until end of transaction block
24.45927095413208
current transaction is aborted, commands ignored until end of transaction block
24.45927095413208
current transaction is aborted, commands ignored until end of transaction block
24.460320234298706
current transaction is aborted, commands ignored until end of transaction block
24.461265325546265
current transaction is aborted, commands ignored until end of transaction block
24.46226978302002
current transaction is aborted, commands ignored until end of transaction block
24.463269233703613
current transaction is aborted, commands ignored until end of transaction block
24.46426486968994
current transaction is aborted, commands ignored until end of transaction block
24.46426486968994
current transaction is aborted, commands ignored until end of transaction block
24.465310096740723
current transaction is aborted, commands ignored until end of transaction block
24.466302394866943
current transaction is aborted, commands ignored until end of transaction block
24.466302394866943
current transaction is aborted, commands ignored until end of transaction block
24.46737766265869
current transaction is aborted, commands ignored until end of transaction block
24.46737766265869
current transaction is aborted, commands ignored until end of transaction block
24.468266487121582
current transaction is aborted, commands ignored until end of transaction block
24.469262838363647
current transaction is aborted, commands ignored until end of transaction block
24.469262838363647
current transaction is aborted, commands ignored until end of transaction block
24.471274614334106
current transaction is aborted, commands ignored until end of transaction block
24.471274614334106
current transaction is aborted, commands ignored until end of transaction block
24.472267627716064
current transaction is aborted, commands ignored until end of transaction block
24.47329545021057
current transaction is aborted, commands ignored until end of transaction block
24.47380566596985
current transaction is aborted, commands ignored until end of transaction block
24.474266052246094
current transaction is aborted, commands ignored until end of transaction block
24.475269556045532
current transaction is aborted, commands ignored until end of transaction block
24.475269556045532
current transaction is aborted, commands ignored until end of transaction block
24.47630739212036
current transaction is aborted, commands ignored until end of transaction block
24.47676944732666
current transaction is aborted, commands ignored until end of transaction block
24.477269172668457
current transaction is aborted, commands ignored until end of transaction block
24.478262662887573
current transaction is aborted, commands ignored until end of transaction block
24.47984790802002
current transaction is aborted, commands ignored until end of transaction block
24.480265378952026
current transaction is aborted, commands ignored until end of transaction block
24.481266021728516
current transaction is aborted, commands ignored until end of transaction block
24.481266021728516
current transaction is aborted, commands ignored until end of transaction block
24.48230266571045
current transaction is aborted, commands ignored until end of transaction block
24.48230266571045
current transaction is aborted, commands ignored until end of transaction block
24.483264446258545
current transaction is aborted, commands ignored until end of transaction block
24.483264446258545
current transaction is aborted, commands ignored until end of transaction block
24.48427438735962
current transaction is aborted, commands ignored until end of transaction block
24.48427438735962
current transaction is aborted, commands ignored until end of transaction block
24.485286951065063
current transaction is aborted, commands ignored until end of transaction block
24.485286951065063
current transaction is aborted, commands ignored until end of transaction block
24.486345529556274
current transaction is aborted, commands ignored until end of transaction block
24.486345529556274
current transaction is aborted, commands ignored until end of transaction block
24.48731780052185
current transaction is aborted, commands ignored until end of transaction block
24.48731780052185
current transaction is aborted, commands ignored until end of transaction block
24.48844575881958
current transaction is aborted, commands ignored until end of transaction block
24.48940110206604
current transaction is aborted, commands ignored until end of transaction block
24.48940110206604
current transaction is aborted, commands ignored until end of transaction block
24.49029779434204
current transaction is aborted, commands ignored until end of transaction block
24.491427183151245
current transaction is aborted, commands ignored until end of transaction block
24.492267370224
current transaction is aborted, commands ignored until end of transaction block
24.493558406829834
current transaction is aborted, commands ignored until end of transaction block
24.494266748428345
current transaction is aborted, commands ignored until end of transaction block
24.495264053344727
current transaction is aborted, commands ignored until end of transaction block
24.495264053344727
current transaction is aborted, commands ignored until end of transaction block
24.496264934539795
current transaction is aborted, commands ignored until end of transaction block
24.496264934539795
current transaction is aborted, commands ignored until end of transaction block
24.497262239456177
current transaction is aborted, commands ignored until end of transaction block
24.497262239456177
current transaction is aborted, commands ignored until end of transaction block
24.49827790260315
current transaction is aborted, commands ignored until end of transaction block
24.499279975891113
current transaction is aborted, commands ignored until end of transaction block
24.500271797180176
current transaction is aborted, commands ignored until end of transaction block
24.50131320953369
current transaction is aborted, commands ignored until end of transaction block
24.50131320953369
current transaction is aborted, commands ignored until end of transaction block
24.502268075942993
current transaction is aborted, commands ignored until end of transaction block
24.502268075942993
current transaction is aborted, commands ignored until end of transaction block
24.503270387649536
current transaction is aborted, commands ignored until end of transaction block
24.503270387649536
current transaction is aborted, commands ignored until end of transaction block
24.504263162612915
current transaction is aborted, commands ignored until end of transaction block
24.505268812179565
current transaction is aborted, commands ignored until end of transaction block
24.506273984909058
current transaction is aborted, commands ignored until end of transaction block
24.506273984909058
current transaction is aborted, commands ignored until end of transaction block
24.507265090942383
current transaction is aborted, commands ignored until end of transaction block
24.50843858718872
current transaction is aborted, commands ignored until end of transaction block
24.509269952774048
current transaction is aborted, commands ignored until end of transaction block
24.510263919830322
current transaction is aborted, commands ignored until end of transaction block
24.510263919830322
current transaction is aborted, commands ignored until end of transaction block
24.51126194000244
current transaction is aborted, commands ignored until end of transaction block
24.512269735336304
current transaction is aborted, commands ignored until end of transaction block
24.51326608657837
current transaction is aborted, commands ignored until end of transaction block
24.51326608657837
current transaction is aborted, commands ignored until end of transaction block
24.514355897903442
current transaction is aborted, commands ignored until end of transaction block
24.51526403427124
current transaction is aborted, commands ignored until end of transaction block
24.51526403427124
current transaction is aborted, commands ignored until end of transaction block
24.516319036483765
current transaction is aborted, commands ignored until end of transaction block
24.516319036483765
current transaction is aborted, commands ignored until end of transaction block
24.51745629310608
current transaction is aborted, commands ignored until end of transaction block
24.518266916275024
current transaction is aborted, commands ignored until end of transaction block
24.51926326751709
current transaction is aborted, commands ignored until end of transaction block
24.51926326751709
current transaction is aborted, commands ignored until end of transaction block
24.5205295085907
current transaction is aborted, commands ignored until end of transaction block
24.521265506744385
current transaction is aborted, commands ignored until end of transaction block
24.521265506744385
current transaction is aborted, commands ignored until end of transaction block
24.522271633148193
current transaction is aborted, commands ignored until end of transaction block
24.522271633148193
current transaction is aborted, commands ignored until end of transaction block
24.523464918136597
current transaction is aborted, commands ignored until end of transaction block
24.52427339553833
current transaction is aborted, commands ignored until end of transaction block
24.52529525756836
current transaction is aborted, commands ignored until end of transaction block
24.52529525756836
current transaction is aborted, commands ignored until end of transaction block
24.526269912719727
current transaction is aborted, commands ignored until end of transaction block
24.527273416519165
current transaction is aborted, commands ignored until end of transaction block
24.528355836868286
current transaction is aborted, commands ignored until end of transaction block
24.529263496398926
current transaction is aborted, commands ignored until end of transaction block
24.5302631855011
current transaction is aborted, commands ignored until end of transaction block
24.5302631855011
current transaction is aborted, commands ignored until end of transaction block
24.531270742416382
current transaction is aborted, commands ignored until end of transaction block
24.531270742416382
current transaction is aborted, commands ignored until end of transaction block
24.533276796340942
current transaction is aborted, commands ignored until end of transaction block
24.534315586090088
current transaction is aborted, commands ignored until end of transaction block
24.53526496887207
current transaction is aborted, commands ignored until end of transaction block
24.53526496887207
current transaction is aborted, commands ignored until end of transaction block
24.536622047424316
current transaction is aborted, commands ignored until end of transaction block
24.537299156188965
current transaction is aborted, commands ignored until end of transaction block
24.537299156188965
current transaction is aborted, commands ignored until end of transaction block
24.538269758224487
current transaction is aborted, commands ignored until end of transaction block
24.538269758224487
current transaction is aborted, commands ignored until end of transaction block
24.53926706314087
current transaction is aborted, commands ignored until end of transaction block
24.540271759033203
current transaction is aborted, commands ignored until end of transaction block
24.540271759033203
current transaction is aborted, commands ignored until end of transaction block
24.541272163391113
current transaction is aborted, commands ignored until end of transaction block
24.541272163391113
current transaction is aborted, commands ignored until end of transaction block
24.542263507843018
current transaction is aborted, commands ignored until end of transaction block
24.542263507843018
current transaction is aborted, commands ignored until end of transaction block
24.54326820373535
current transaction is aborted, commands ignored until end of transaction block
24.544273376464844
current transaction is aborted, commands ignored until end of transaction block
24.544273376464844
current transaction is aborted, commands ignored until end of transaction block
24.546458959579468
current transaction is aborted, commands ignored until end of transaction block
24.5472629070282
current transaction is aborted, commands ignored until end of transaction block
24.548274278640747
current transaction is aborted, commands ignored until end of transaction block
24.548274278640747
current transaction is aborted, commands ignored until end of transaction block
24.549271821975708
current transaction is aborted, commands ignored until end of transaction block
24.549271821975708
current transaction is aborted, commands ignored until end of transaction block
24.55030655860901
current transaction is aborted, commands ignored until end of transaction block
24.55030655860901
current transaction is aborted, commands ignored until end of transaction block
24.551405429840088
current transaction is aborted, commands ignored until end of transaction block
24.551405429840088
current transaction is aborted, commands ignored until end of transaction block
24.55226492881775
current transaction is aborted, commands ignored until end of transaction block
24.55226492881775
current transaction is aborted, commands ignored until end of transaction block
24.553267240524292
current transaction is aborted, commands ignored until end of transaction block
24.55426788330078
current transaction is aborted, commands ignored until end of transaction block
24.554369926452637
current transaction is aborted, commands ignored until end of transaction block
24.555306434631348
current transaction is aborted, commands ignored until end of transaction block
24.556314945220947
current transaction is aborted, commands ignored until end of transaction block
24.55727481842041
current transaction is aborted, commands ignored until end of transaction block
24.55727481842041
current transaction is aborted, commands ignored until end of transaction block
24.558265447616577
current transaction is aborted, commands ignored until end of transaction block
24.558265447616577
current transaction is aborted, commands ignored until end of transaction block
24.55926537513733
current transaction is aborted, commands ignored until end of transaction block
24.56026601791382
current transaction is aborted, commands ignored until end of transaction block
24.56126379966736
current transaction is aborted, commands ignored until end of transaction block
24.562283277511597
current transaction is aborted, commands ignored until end of transaction block
24.563488960266113
current transaction is aborted, commands ignored until end of transaction block
24.5644109249115
current transaction is aborted, commands ignored until end of transaction block
24.56526279449463
current transaction is aborted, commands ignored until end of transaction block
24.56526279449463
current transaction is aborted, commands ignored until end of transaction block
24.566267013549805
current transaction is aborted, commands ignored until end of transaction block
24.5672709941864
current transaction is aborted, commands ignored until end of transaction block
24.568276405334473
current transaction is aborted, commands ignored until end of transaction block
24.56926703453064
current transaction is aborted, commands ignored until end of transaction block
24.56926703453064
current transaction is aborted, commands ignored until end of transaction block
24.570272207260132
current transaction is aborted, commands ignored until end of transaction block
24.57126498222351
current transaction is aborted, commands ignored until end of transaction block
24.57126498222351
current transaction is aborted, commands ignored until end of transaction block
24.572316884994507
current transaction is aborted, commands ignored until end of transaction block
24.572316884994507
current transaction is aborted, commands ignored until end of transaction block
24.573265075683594
current transaction is aborted, commands ignored until end of transaction block
24.57426428794861
current transaction is aborted, commands ignored until end of transaction block
24.57426428794861
current transaction is aborted, commands ignored until end of transaction block
24.57626485824585
current transaction is aborted, commands ignored until end of transaction block
24.577266931533813
current transaction is aborted, commands ignored until end of transaction block
24.57826566696167
current transaction is aborted, commands ignored until end of transaction block
24.57826566696167
current transaction is aborted, commands ignored until end of transaction block
24.57926630973816
current transaction is aborted, commands ignored until end of transaction block
24.58026647567749
current transaction is aborted, commands ignored until end of transaction block
24.58126449584961
current transaction is aborted, commands ignored until end of transaction block
24.58227229118347
current transaction is aborted, commands ignored until end of transaction block
24.58227229118347
current transaction is aborted, commands ignored until end of transaction block
24.583324909210205
current transaction is aborted, commands ignored until end of transaction block
24.58459758758545
current transaction is aborted, commands ignored until end of transaction block
24.585264444351196
current transaction is aborted, commands ignored until end of transaction block
24.585264444351196
current transaction is aborted, commands ignored until end of transaction block
24.586270570755005
current transaction is aborted, commands ignored until end of transaction block
24.587269067764282
current transaction is aborted, commands ignored until end of transaction block
24.588645935058594
current transaction is aborted, commands ignored until end of transaction block
24.589263677597046
current transaction is aborted, commands ignored until end of transaction block
24.59027647972107
current transaction is aborted, commands ignored until end of transaction block
24.591312646865845
current transaction is aborted, commands ignored until end of transaction block
24.591312646865845
current transaction is aborted, commands ignored until end of transaction block
24.59230375289917
current transaction is aborted, commands ignored until end of transaction block
24.59230375289917
current transaction is aborted, commands ignored until end of transaction block
24.593265295028687
current transaction is aborted, commands ignored until end of transaction block
24.593265295028687
current transaction is aborted, commands ignored until end of transaction block
24.594303131103516
current transaction is aborted, commands ignored until end of transaction block
24.59526515007019
current transaction is aborted, commands ignored until end of transaction block
24.59526515007019
current transaction is aborted, commands ignored until end of transaction block
24.596271276474
current transaction is aborted, commands ignored until end of transaction block
24.596271276474
current transaction is aborted, commands ignored until end of transaction block
24.597274780273438
current transaction is aborted, commands ignored until end of transaction block
24.598267555236816
current transaction is aborted, commands ignored until end of transaction block
24.599266052246094
current transaction is aborted, commands ignored until end of transaction block
24.600274085998535
current transaction is aborted, commands ignored until end of transaction block
24.600274085998535
current transaction is aborted, commands ignored until end of transaction block
24.601266384124756
current transaction is aborted, commands ignored until end of transaction block
24.602263927459717
current transaction is aborted, commands ignored until end of transaction block
24.603288173675537
current transaction is aborted, commands ignored until end of transaction block
24.60426640510559
current transaction is aborted, commands ignored until end of transaction block
24.605305194854736
current transaction is aborted, commands ignored until end of transaction block
24.605305194854736
current transaction is aborted, commands ignored until end of transaction block
24.606303215026855
current transaction is aborted, commands ignored until end of transaction block
24.606303215026855
current transaction is aborted, commands ignored until end of transaction block
24.607303619384766
current transaction is aborted, commands ignored until end of transaction block
24.607303619384766
current transaction is aborted, commands ignored until end of transaction block
24.608337879180908
current transaction is aborted, commands ignored until end of transaction block
24.608337879180908
current transaction is aborted, commands ignored until end of transaction block
24.609266757965088
current transaction is aborted, commands ignored until end of transaction block
24.609266757965088
current transaction is aborted, commands ignored until end of transaction block
24.610271215438843
current transaction is aborted, commands ignored until end of transaction block
24.611303567886353
current transaction is aborted, commands ignored until end of transaction block
24.612266302108765
current transaction is aborted, commands ignored until end of transaction block
24.612266302108765
current transaction is aborted, commands ignored until end of transaction block
24.613263607025146
current transaction is aborted, commands ignored until end of transaction block
24.61426568031311
current transaction is aborted, commands ignored until end of transaction block
24.615479469299316
current transaction is aborted, commands ignored until end of transaction block
24.61626696586609
current transaction is aborted, commands ignored until end of transaction block
24.618273973464966
current transaction is aborted, commands ignored until end of transaction block
24.62026572227478
current transaction is aborted, commands ignored until end of transaction block
24.62026572227478
current transaction is aborted, commands ignored until end of transaction block
24.621265649795532
current transaction is aborted, commands ignored until end of transaction block
24.621265649795532
current transaction is aborted, commands ignored until end of transaction block
24.622269868850708
current transaction is aborted, commands ignored until end of transaction block
24.623266220092773
current transaction is aborted, commands ignored until end of transaction block
24.62428069114685
current transaction is aborted, commands ignored until end of transaction block
24.625269174575806
current transaction is aborted, commands ignored until end of transaction block
24.625269174575806
current transaction is aborted, commands ignored until end of transaction block
24.62627649307251
current transaction is aborted, commands ignored until end of transaction block
24.62627649307251
current transaction is aborted, commands ignored until end of transaction block
24.62726378440857
current transaction is aborted, commands ignored until end of transaction block
24.628264904022217
current transaction is aborted, commands ignored until end of transaction block
24.629261255264282
current transaction is aborted, commands ignored until end of transaction block
24.629261255264282
current transaction is aborted, commands ignored until end of transaction block
24.630271673202515
current transaction is aborted, commands ignored until end of transaction block
24.630271673202515
current transaction is aborted, commands ignored until end of transaction block
24.63126301765442
current transaction is aborted, commands ignored until end of transaction block
24.63126301765442
current transaction is aborted, commands ignored until end of transaction block
24.632263898849487
current transaction is aborted, commands ignored until end of transaction block
24.632263898849487
current transaction is aborted, commands ignored until end of transaction block
24.633267402648926
current transaction is aborted, commands ignored until end of transaction block
24.634271383285522
current transaction is aborted, commands ignored until end of transaction block
24.635268926620483
current transaction is aborted, commands ignored until end of transaction block
24.635268926620483
current transaction is aborted, commands ignored until end of transaction block
24.63626217842102
current transaction is aborted, commands ignored until end of transaction block
24.637264013290405
current transaction is aborted, commands ignored until end of transaction block
24.638288259506226
current transaction is aborted, commands ignored until end of transaction block
24.638288259506226
current transaction is aborted, commands ignored until end of transaction block
24.63926339149475
current transaction is aborted, commands ignored until end of transaction block
24.63926339149475
current transaction is aborted, commands ignored until end of transaction block
24.640387296676636
current transaction is aborted, commands ignored until end of transaction block
24.640387296676636
current transaction is aborted, commands ignored until end of transaction block
24.641265630722046
current transaction is aborted, commands ignored until end of transaction block
24.64226484298706
current transaction is aborted, commands ignored until end of transaction block
24.64226484298706
current transaction is aborted, commands ignored until end of transaction block
24.643342971801758
current transaction is aborted, commands ignored until end of transaction block
24.64427351951599
current transaction is aborted, commands ignored until end of transaction block
24.645288944244385
current transaction is aborted, commands ignored until end of transaction block
24.647265672683716
current transaction is aborted, commands ignored until end of transaction block
24.648274183273315
current transaction is aborted, commands ignored until end of transaction block
24.64926838874817
current transaction is aborted, commands ignored until end of transaction block
24.64926838874817
current transaction is aborted, commands ignored until end of transaction block
24.6502685546875
current transaction is aborted, commands ignored until end of transaction block
24.651265621185303
current transaction is aborted, commands ignored until end of transaction block
24.651265621185303
current transaction is aborted, commands ignored until end of transaction block
24.65227246284485
current transaction is aborted, commands ignored until end of transaction block
24.653265237808228
current transaction is aborted, commands ignored until end of transaction block
24.65426468849182
current transaction is aborted, commands ignored until end of transaction block
24.65426468849182
current transaction is aborted, commands ignored until end of transaction block
24.655269384384155
current transaction is aborted, commands ignored until end of transaction block
24.655269384384155
current transaction is aborted, commands ignored until end of transaction block
24.65626287460327
current transaction is aborted, commands ignored until end of transaction block
24.65626287460327
current transaction is aborted, commands ignored until end of transaction block
24.65726137161255
current transaction is aborted, commands ignored until end of transaction block
24.65726137161255
current transaction is aborted, commands ignored until end of transaction block
24.65826654434204
current transaction is aborted, commands ignored until end of transaction block
24.66027331352234
current transaction is aborted, commands ignored until end of transaction block
24.661437273025513
current transaction is aborted, commands ignored until end of transaction block
24.662264585494995
current transaction is aborted, commands ignored until end of transaction block
24.662264585494995
current transaction is aborted, commands ignored until end of transaction block
24.663267850875854
current transaction is aborted, commands ignored until end of transaction block
24.664264917373657
current transaction is aborted, commands ignored until end of transaction block
24.665265321731567
current transaction is aborted, commands ignored until end of transaction block
24.66626787185669
current transaction is aborted, commands ignored until end of transaction block
24.667317628860474
current transaction is aborted, commands ignored until end of transaction block
24.668700218200684
current transaction is aborted, commands ignored until end of transaction block
24.66926598548889
current transaction is aborted, commands ignored until end of transaction block
24.670268535614014
current transaction is aborted, commands ignored until end of transaction block
24.670268535614014
current transaction is aborted, commands ignored until end of transaction block
24.671303272247314
current transaction is aborted, commands ignored until end of transaction block
24.672268867492676
current transaction is aborted, commands ignored until end of transaction block
24.673279285430908
current transaction is aborted, commands ignored until end of transaction block
24.674272060394287
current transaction is aborted, commands ignored until end of transaction block
24.67526364326477
current transaction is aborted, commands ignored until end of transaction block
24.676262617111206
current transaction is aborted, commands ignored until end of transaction block
24.676262617111206
current transaction is aborted, commands ignored until end of transaction block
24.677263259887695
current transaction is aborted, commands ignored until end of transaction block
24.678349018096924
current transaction is aborted, commands ignored until end of transaction block
24.67926836013794
current transaction is aborted, commands ignored until end of transaction block
24.67926836013794
current transaction is aborted, commands ignored until end of transaction block
24.680262804031372
current transaction is aborted, commands ignored until end of transaction block
24.680262804031372
current transaction is aborted, commands ignored until end of transaction block
24.681262493133545
current transaction is aborted, commands ignored until end of transaction block
24.6822726726532
current transaction is aborted, commands ignored until end of transaction block
24.6822726726532
current transaction is aborted, commands ignored until end of transaction block
24.68326997756958
current transaction is aborted, commands ignored until end of transaction block
24.6842679977417
current transaction is aborted, commands ignored until end of transaction block
24.685483932495117
current transaction is aborted, commands ignored until end of transaction block
24.686277389526367
current transaction is aborted, commands ignored until end of transaction block
24.687262773513794
current transaction is aborted, commands ignored until end of transaction block
24.687262773513794
current transaction is aborted, commands ignored until end of transaction block
24.68826937675476
current transaction is aborted, commands ignored until end of transaction block
24.68826937675476
current transaction is aborted, commands ignored until end of transaction block
24.6892671585083
current transaction is aborted, commands ignored until end of transaction block
24.690263748168945
current transaction is aborted, commands ignored until end of transaction block
24.690263748168945
current transaction is aborted, commands ignored until end of transaction block
24.691262006759644
current transaction is aborted, commands ignored until end of transaction block
24.692278623580933
current transaction is aborted, commands ignored until end of transaction block
24.693263292312622
current transaction is aborted, commands ignored until end of transaction block
24.693263292312622
current transaction is aborted, commands ignored until end of transaction block
24.695266723632812
current transaction is aborted, commands ignored until end of transaction block
24.696271657943726
current transaction is aborted, commands ignored until end of transaction block
24.696271657943726
current transaction is aborted, commands ignored until end of transaction block
24.697264194488525
current transaction is aborted, commands ignored until end of transaction block
24.69926881790161
current transaction is aborted, commands ignored until end of transaction block
24.700266122817993
current transaction is aborted, commands ignored until end of transaction block
24.70140051841736
current transaction is aborted, commands ignored until end of transaction block
24.70226788520813
current transaction is aborted, commands ignored until end of transaction block
24.703264474868774
current transaction is aborted, commands ignored until end of transaction block
24.703264474868774
current transaction is aborted, commands ignored until end of transaction block
24.704263925552368
current transaction is aborted, commands ignored until end of transaction block
24.704263925552368
current transaction is aborted, commands ignored until end of transaction block
24.70526385307312
current transaction is aborted, commands ignored until end of transaction block
24.70526385307312
current transaction is aborted, commands ignored until end of transaction block
24.706265211105347
current transaction is aborted, commands ignored until end of transaction block
24.707529306411743
current transaction is aborted, commands ignored until end of transaction block
24.70826745033264
current transaction is aborted, commands ignored until end of transaction block
24.709270238876343
current transaction is aborted, commands ignored until end of transaction block
24.710280656814575
current transaction is aborted, commands ignored until end of transaction block
24.71127438545227
current transaction is aborted, commands ignored until end of transaction block
24.71127438545227
current transaction is aborted, commands ignored until end of transaction block
24.71251153945923
current transaction is aborted, commands ignored until end of transaction block
24.713261604309082
current transaction is aborted, commands ignored until end of transaction block
24.714715242385864
current transaction is aborted, commands ignored until end of transaction block
24.7152681350708
current transaction is aborted, commands ignored until end of transaction block
24.71636152267456
current transaction is aborted, commands ignored until end of transaction block
24.71636152267456
current transaction is aborted, commands ignored until end of transaction block
24.717262983322144
current transaction is aborted, commands ignored until end of transaction block
24.717262983322144
current transaction is aborted, commands ignored until end of transaction block
24.71826410293579
current transaction is aborted, commands ignored until end of transaction block
24.71826410293579
current transaction is aborted, commands ignored until end of transaction block
24.719263792037964
current transaction is aborted, commands ignored until end of transaction block
24.719263792037964
current transaction is aborted, commands ignored until end of transaction block
24.720262050628662
current transaction is aborted, commands ignored until end of transaction block
24.721272706985474
current transaction is aborted, commands ignored until end of transaction block
24.72226309776306
current transaction is aborted, commands ignored until end of transaction block
24.723278760910034
current transaction is aborted, commands ignored until end of transaction block
24.724273443222046
current transaction is aborted, commands ignored until end of transaction block
24.72526264190674
current transaction is aborted, commands ignored until end of transaction block
24.72526264190674
current transaction is aborted, commands ignored until end of transaction block
24.726262092590332
current transaction is aborted, commands ignored until end of transaction block
24.727269172668457
current transaction is aborted, commands ignored until end of transaction block
24.729264974594116
current transaction is aborted, commands ignored until end of transaction block
24.730335235595703
current transaction is aborted, commands ignored until end of transaction block
24.73126459121704
current transaction is aborted, commands ignored until end of transaction block
24.73126459121704
current transaction is aborted, commands ignored until end of transaction block
24.732301235198975
current transaction is aborted, commands ignored until end of transaction block
24.732301235198975
current transaction is aborted, commands ignored until end of transaction block
24.733263731002808
current transaction is aborted, commands ignored until end of transaction block
24.733263731002808
current transaction is aborted, commands ignored until end of transaction block
24.734304904937744
current transaction is aborted, commands ignored until end of transaction block
24.734304904937744
current transaction is aborted, commands ignored until end of transaction block
24.735309600830078
current transaction is aborted, commands ignored until end of transaction block
24.736266136169434
current transaction is aborted, commands ignored until end of transaction block
24.736266136169434
current transaction is aborted, commands ignored until end of transaction block
24.737271785736084
current transaction is aborted, commands ignored until end of transaction block
24.738282680511475
current transaction is aborted, commands ignored until end of transaction block
24.73929452896118
current transaction is aborted, commands ignored until end of transaction block
24.7403507232666
current transaction is aborted, commands ignored until end of transaction block
24.74128007888794
current transaction is aborted, commands ignored until end of transaction block
24.74128007888794
current transaction is aborted, commands ignored until end of transaction block
24.742274045944214
current transaction is aborted, commands ignored until end of transaction block
24.74326515197754
current transaction is aborted, commands ignored until end of transaction block
24.744266033172607
current transaction is aborted, commands ignored until end of transaction block
24.74527096748352
current transaction is aborted, commands ignored until end of transaction block
24.74627184867859
current transaction is aborted, commands ignored until end of transaction block
24.747267723083496
current transaction is aborted, commands ignored until end of transaction block
24.748271465301514
current transaction is aborted, commands ignored until end of transaction block
24.74928116798401
current transaction is aborted, commands ignored until end of transaction block
24.75026297569275
current transaction is aborted, commands ignored until end of transaction block
24.75026297569275
current transaction is aborted, commands ignored until end of transaction block
24.751269340515137
current transaction is aborted, commands ignored until end of transaction block
24.75227427482605
current transaction is aborted, commands ignored until end of transaction block
24.75326156616211
current transaction is aborted, commands ignored until end of transaction block
24.75326156616211
current transaction is aborted, commands ignored until end of transaction block
24.754363775253296
current transaction is aborted, commands ignored until end of transaction block
24.755268335342407
current transaction is aborted, commands ignored until end of transaction block
24.756537675857544
current transaction is aborted, commands ignored until end of transaction block
24.757264137268066
current transaction is aborted, commands ignored until end of transaction block
24.75826358795166
current transaction is aborted, commands ignored until end of transaction block
24.75826358795166
current transaction is aborted, commands ignored until end of transaction block
24.759264707565308
current transaction is aborted, commands ignored until end of transaction block
24.760267972946167
current transaction is aborted, commands ignored until end of transaction block
24.760267972946167
current transaction is aborted, commands ignored until end of transaction block
24.761327505111694
current transaction is aborted, commands ignored until end of transaction block
24.763293027877808
current transaction is aborted, commands ignored until end of transaction block
24.764264345169067
current transaction is aborted, commands ignored until end of transaction block
24.765267848968506
current transaction is aborted, commands ignored until end of transaction block
24.766262769699097
current transaction is aborted, commands ignored until end of transaction block
24.766262769699097
current transaction is aborted, commands ignored until end of transaction block
24.767264127731323
current transaction is aborted, commands ignored until end of transaction block
24.767264127731323
current transaction is aborted, commands ignored until end of transaction block
24.768264055252075
current transaction is aborted, commands ignored until end of transaction block
24.76948642730713
current transaction is aborted, commands ignored until end of transaction block
24.770284414291382
current transaction is aborted, commands ignored until end of transaction block
24.770284414291382
current transaction is aborted, commands ignored until end of transaction block
24.771262407302856
current transaction is aborted, commands ignored until end of transaction block
24.772270441055298
current transaction is aborted, commands ignored until end of transaction block
24.773261785507202
current transaction is aborted, commands ignored until end of transaction block
24.773261785507202
current transaction is aborted, commands ignored until end of transaction block
24.774260997772217
current transaction is aborted, commands ignored until end of transaction block
24.774260997772217
current transaction is aborted, commands ignored until end of transaction block
24.775262355804443
current transaction is aborted, commands ignored until end of transaction block
24.776267528533936
current transaction is aborted, commands ignored until end of transaction block
24.776267528533936
current transaction is aborted, commands ignored until end of transaction block
24.7772696018219
current transaction is aborted, commands ignored until end of transaction block
24.778262615203857
current transaction is aborted, commands ignored until end of transaction block
24.779263734817505
current transaction is aborted, commands ignored until end of transaction block
24.779306173324585
current transaction is aborted, commands ignored until end of transaction block
24.780263423919678
current transaction is aborted, commands ignored until end of transaction block
24.780263423919678
current transaction is aborted, commands ignored until end of transaction block
24.780263423919678
current transaction is aborted, commands ignored until end of transaction block
24.78126358985901
current transaction is aborted, commands ignored until end of transaction block
24.782262086868286
current transaction is aborted, commands ignored until end of transaction block
24.782262086868286
current transaction is aborted, commands ignored until end of transaction block
24.784263134002686
current transaction is aborted, commands ignored until end of transaction block
24.784263134002686
current transaction is aborted, commands ignored until end of transaction block
24.78550410270691
current transaction is aborted, commands ignored until end of transaction block
24.78550410270691
current transaction is aborted, commands ignored until end of transaction block
24.7862606048584
current transaction is aborted, commands ignored until end of transaction block
25.2592613697052
current transaction is aborted, commands ignored until end of transaction block
25.26026153564453
current transaction is aborted, commands ignored until end of transaction block
25.26126194000244
current transaction is aborted, commands ignored until end of transaction block
25.26126194000244
current transaction is aborted, commands ignored until end of transaction block
25.26226234436035
current transaction is aborted, commands ignored until end of transaction block
25.26226234436035
current transaction is aborted, commands ignored until end of transaction block
25.263264179229736
current transaction is aborted, commands ignored until end of transaction block
25.263264179229736
current transaction is aborted, commands ignored until end of transaction block
25.26438856124878
current transaction is aborted, commands ignored until end of transaction block
25.265270948410034
current transaction is aborted, commands ignored until end of transaction block
25.265270948410034
current transaction is aborted, commands ignored until end of transaction block
25.26626682281494
current transaction is aborted, commands ignored until end of transaction block
25.26626682281494
current transaction is aborted, commands ignored until end of transaction block
25.26726460456848
current transaction is aborted, commands ignored until end of transaction block
25.268267154693604
current transaction is aborted, commands ignored until end of transaction block
25.269266605377197
current transaction is aborted, commands ignored until end of transaction block
25.269266605377197
current transaction is aborted, commands ignored until end of transaction block
25.27026343345642
current transaction is aborted, commands ignored until end of transaction block
25.27026343345642
current transaction is aborted, commands ignored until end of transaction block
25.271350860595703
current transaction is aborted, commands ignored until end of transaction block
25.27226161956787
current transaction is aborted, commands ignored until end of transaction block
25.27226161956787
current transaction is aborted, commands ignored until end of transaction block
25.273260593414307
current transaction is aborted, commands ignored until end of transaction block
25.27340579032898
current transaction is aborted, commands ignored until end of transaction block
25.274263858795166
current transaction is aborted, commands ignored until end of transaction block
25.274263858795166
current transaction is aborted, commands ignored until end of transaction block
25.275270462036133
current transaction is aborted, commands ignored until end of transaction block
25.275270462036133
current transaction is aborted, commands ignored until end of transaction block
25.276268482208252
current transaction is aborted, commands ignored until end of transaction block
25.277275562286377
current transaction is aborted, commands ignored until end of transaction block
25.278268098831177
current transaction is aborted, commands ignored until end of transaction block
25.278268098831177
current transaction is aborted, commands ignored until end of transaction block
25.278268098831177
current transaction is aborted, commands ignored until end of transaction block
25.279263019561768
current transaction is aborted, commands ignored until end of transaction block
25.280264139175415
current transaction is aborted, commands ignored until end of transaction block
25.280264139175415
current transaction is aborted, commands ignored until end of transaction block
25.280264139175415
current transaction is aborted, commands ignored until end of transaction block
25.281312465667725
current transaction is aborted, commands ignored until end of transaction block
25.281312465667725
current transaction is aborted, commands ignored until end of transaction block
25.282265663146973
current transaction is aborted, commands ignored until end of transaction block
25.283266305923462
current transaction is aborted, commands ignored until end of transaction block
25.284298181533813
current transaction is aborted, commands ignored until end of transaction block
25.285268783569336
current transaction is aborted, commands ignored until end of transaction block
25.2862708568573
current transaction is aborted, commands ignored until end of transaction block
25.2862708568573
current transaction is aborted, commands ignored until end of transaction block
25.287262678146362
current transaction is aborted, commands ignored until end of transaction block
25.287262678146362
current transaction is aborted, commands ignored until end of transaction block
25.288263082504272
current transaction is aborted, commands ignored until end of transaction block
25.288263082504272
current transaction is aborted, commands ignored until end of transaction block
25.28932547569275
current transaction is aborted, commands ignored until end of transaction block
25.290262699127197
current transaction is aborted, commands ignored until end of transaction block
25.290262699127197
current transaction is aborted, commands ignored until end of transaction block
25.29130482673645
current transaction is aborted, commands ignored until end of transaction block
25.29130482673645
current transaction is aborted, commands ignored until end of transaction block
25.29226303100586
current transaction is aborted, commands ignored until end of transaction block
25.29226303100586
current transaction is aborted, commands ignored until end of transaction block
25.293370246887207
current transaction is aborted, commands ignored until end of transaction block
25.29426598548889
current transaction is aborted, commands ignored until end of transaction block
25.29426598548889
current transaction is aborted, commands ignored until end of transaction block
25.295260906219482
current transaction is aborted, commands ignored until end of transaction block
25.295260906219482
current transaction is aborted, commands ignored until end of transaction block
25.296308040618896
current transaction is aborted, commands ignored until end of transaction block
25.296308040618896
current transaction is aborted, commands ignored until end of transaction block
25.297266483306885
current transaction is aborted, commands ignored until end of transaction block
25.2982656955719
current transaction is aborted, commands ignored until end of transaction block
25.2982656955719
current transaction is aborted, commands ignored until end of transaction block
25.299264430999756
current transaction is aborted, commands ignored until end of transaction block
25.299264430999756
current transaction is aborted, commands ignored until end of transaction block
25.299264430999756
current transaction is aborted, commands ignored until end of transaction block
25.300264596939087
current transaction is aborted, commands ignored until end of transaction block
25.301262855529785
current transaction is aborted, commands ignored until end of transaction block
25.301262855529785
current transaction is aborted, commands ignored until end of transaction block
25.3022620677948
current transaction is aborted, commands ignored until end of transaction block
25.303263902664185
current transaction is aborted, commands ignored until end of transaction block
25.303263902664185
current transaction is aborted, commands ignored until end of transaction block
25.304267168045044
current transaction is aborted, commands ignored until end of transaction block
25.30526566505432
current transaction is aborted, commands ignored until end of transaction block
25.30626344680786
current transaction is aborted, commands ignored until end of transaction block
25.30626344680786
current transaction is aborted, commands ignored until end of transaction block
25.307265520095825
current transaction is aborted, commands ignored until end of transaction block
25.307265520095825
current transaction is aborted, commands ignored until end of transaction block
25.308266639709473
current transaction is aborted, commands ignored until end of transaction block
25.308266639709473
current transaction is aborted, commands ignored until end of transaction block
25.309266090393066
current transaction is aborted, commands ignored until end of transaction block
25.309266090393066
current transaction is aborted, commands ignored until end of transaction block
25.31026577949524
current transaction is aborted, commands ignored until end of transaction block
25.311298370361328
current transaction is aborted, commands ignored until end of transaction block
25.312267303466797
current transaction is aborted, commands ignored until end of transaction block
25.313297510147095
current transaction is aborted, commands ignored until end of transaction block
25.314262866973877
current transaction is aborted, commands ignored until end of transaction block
25.314262866973877
current transaction is aborted, commands ignored until end of transaction block
25.31526231765747
current transaction is aborted, commands ignored until end of transaction block
25.31526231765747
current transaction is aborted, commands ignored until end of transaction block
25.316267013549805
current transaction is aborted, commands ignored until end of transaction block
25.317267656326294
current transaction is aborted, commands ignored until end of transaction block
25.317267656326294
current transaction is aborted, commands ignored until end of transaction block
25.318267583847046
current transaction is aborted, commands ignored until end of transaction block
25.318267583847046
current transaction is aborted, commands ignored until end of transaction block
25.319266080856323
current transaction is aborted, commands ignored until end of transaction block
25.319266080856323
current transaction is aborted, commands ignored until end of transaction block
25.32026505470276
current transaction is aborted, commands ignored until end of transaction block
25.32026505470276
current transaction is aborted, commands ignored until end of transaction block
25.32127332687378
current transaction is aborted, commands ignored until end of transaction block
25.32127332687378
current transaction is aborted, commands ignored until end of transaction block
25.322267055511475
current transaction is aborted, commands ignored until end of transaction block
25.322267055511475
current transaction is aborted, commands ignored until end of transaction block
25.32366919517517
current transaction is aborted, commands ignored until end of transaction block
25.324275970458984
current transaction is aborted, commands ignored until end of transaction block
25.32526683807373
current transaction is aborted, commands ignored until end of transaction block
25.326265573501587
current transaction is aborted, commands ignored until end of transaction block
25.326265573501587
current transaction is aborted, commands ignored until end of transaction block
25.3274085521698
current transaction is aborted, commands ignored until end of transaction block
25.328269004821777
current transaction is aborted, commands ignored until end of transaction block
25.328269004821777
current transaction is aborted, commands ignored until end of transaction block
25.329262733459473
current transaction is aborted, commands ignored until end of transaction block
25.33026933670044
current transaction is aborted, commands ignored until end of transaction block
25.331266403198242
current transaction is aborted, commands ignored until end of transaction block
25.33231496810913
current transaction is aborted, commands ignored until end of transaction block
25.33328080177307
current transaction is aborted, commands ignored until end of transaction block
25.33328080177307
current transaction is aborted, commands ignored until end of transaction block
25.334480047225952
current transaction is aborted, commands ignored until end of transaction block
25.335435152053833
current transaction is aborted, commands ignored until end of transaction block
25.335435152053833
current transaction is aborted, commands ignored until end of transaction block
25.33626365661621
current transaction is aborted, commands ignored until end of transaction block
25.33726692199707
current transaction is aborted, commands ignored until end of transaction block
25.33726692199707
current transaction is aborted, commands ignored until end of transaction block
25.338263750076294
current transaction is aborted, commands ignored until end of transaction block
25.338263750076294
current transaction is aborted, commands ignored until end of transaction block
25.340278148651123
current transaction is aborted, commands ignored until end of transaction block
25.341275453567505
current transaction is aborted, commands ignored until end of transaction block
25.342264652252197
current transaction is aborted, commands ignored until end of transaction block
25.343266010284424
current transaction is aborted, commands ignored until end of transaction block
25.343266010284424
current transaction is aborted, commands ignored until end of transaction block
25.344334840774536
current transaction is aborted, commands ignored until end of transaction block
25.345266103744507
current transaction is aborted, commands ignored until end of transaction block
25.34643578529358
current transaction is aborted, commands ignored until end of transaction block
25.34726619720459
current transaction is aborted, commands ignored until end of transaction block
25.348264455795288
current transaction is aborted, commands ignored until end of transaction block
25.348264455795288
current transaction is aborted, commands ignored until end of transaction block
25.350278854370117
current transaction is aborted, commands ignored until end of transaction block
25.35126566886902
current transaction is aborted, commands ignored until end of transaction block
25.35126566886902
current transaction is aborted, commands ignored until end of transaction block
25.353275060653687
current transaction is aborted, commands ignored until end of transaction block
25.35426902770996
current transaction is aborted, commands ignored until end of transaction block
25.35526466369629
current transaction is aborted, commands ignored until end of transaction block
25.35526466369629
current transaction is aborted, commands ignored until end of transaction block
25.356263637542725
current transaction is aborted, commands ignored until end of transaction block
25.35726284980774
current transaction is aborted, commands ignored until end of transaction block
25.35726284980774
current transaction is aborted, commands ignored until end of transaction block
25.35826849937439
current transaction is aborted, commands ignored until end of transaction block
25.359400272369385
current transaction is aborted, commands ignored until end of transaction block
25.361266136169434
current transaction is aborted, commands ignored until end of transaction block
25.361266136169434
current transaction is aborted, commands ignored until end of transaction block
25.36326551437378
current transaction is aborted, commands ignored until end of transaction block
25.36326551437378
current transaction is aborted, commands ignored until end of transaction block
25.36426615715027
current transaction is aborted, commands ignored until end of transaction block
25.36526393890381
current transaction is aborted, commands ignored until end of transaction block
25.36644196510315
current transaction is aborted, commands ignored until end of transaction block
25.36827063560486
current transaction is aborted, commands ignored until end of transaction block
25.3692626953125
current transaction is aborted, commands ignored until end of transaction block
25.37026858329773
current transaction is aborted, commands ignored until end of transaction block
25.37026858329773
current transaction is aborted, commands ignored until end of transaction block
25.37026858329773
current transaction is aborted, commands ignored until end of transaction block
25.371264219284058
current transaction is aborted, commands ignored until end of transaction block
25.372263431549072
current transaction is aborted, commands ignored until end of transaction block
25.372263431549072
current transaction is aborted, commands ignored until end of transaction block
25.37326407432556
current transaction is aborted, commands ignored until end of transaction block
25.374263048171997
current transaction is aborted, commands ignored until end of transaction block
25.374263048171997
current transaction is aborted, commands ignored until end of transaction block
25.37526798248291
current transaction is aborted, commands ignored until end of transaction block
25.376272678375244
current transaction is aborted, commands ignored until end of transaction block
25.376272678375244
current transaction is aborted, commands ignored until end of transaction block
25.37726378440857
current transaction is aborted, commands ignored until end of transaction block
25.37726378440857
current transaction is aborted, commands ignored until end of transaction block
25.378293991088867
current transaction is aborted, commands ignored until end of transaction block
25.378293991088867
current transaction is aborted, commands ignored until end of transaction block
25.37926721572876
current transaction is aborted, commands ignored until end of transaction block
25.37926721572876
current transaction is aborted, commands ignored until end of transaction block
25.381475925445557
current transaction is aborted, commands ignored until end of transaction block
25.382266759872437
current transaction is aborted, commands ignored until end of transaction block
25.382266759872437
current transaction is aborted, commands ignored until end of transaction block
25.383274793624878
current transaction is aborted, commands ignored until end of transaction block
25.383274793624878
current transaction is aborted, commands ignored until end of transaction block
25.38434362411499
current transaction is aborted, commands ignored until end of transaction block
25.38434362411499
current transaction is aborted, commands ignored until end of transaction block
25.38526439666748
current transaction is aborted, commands ignored until end of transaction block
25.38526439666748
current transaction is aborted, commands ignored until end of transaction block
25.38626480102539
current transaction is aborted, commands ignored until end of transaction block
25.38626480102539
current transaction is aborted, commands ignored until end of transaction block
25.387269020080566
current transaction is aborted, commands ignored until end of transaction block
25.38827109336853
current transaction is aborted, commands ignored until end of transaction block
25.38926362991333
current transaction is aborted, commands ignored until end of transaction block
25.390270471572876
current transaction is aborted, commands ignored until end of transaction block
25.39126491546631
current transaction is aborted, commands ignored until end of transaction block
25.39186692237854
current transaction is aborted, commands ignored until end of transaction block
25.39226245880127
current transaction is aborted, commands ignored until end of transaction block
25.39226245880127
current transaction is aborted, commands ignored until end of transaction block
25.39326000213623
current transaction is aborted, commands ignored until end of transaction block
25.39426589012146
current transaction is aborted, commands ignored until end of transaction block
25.394630670547485
current transaction is aborted, commands ignored until end of transaction block
25.39577317237854
current transaction is aborted, commands ignored until end of transaction block
25.39626407623291
current transaction is aborted, commands ignored until end of transaction block
25.39626407623291
current transaction is aborted, commands ignored until end of transaction block
25.39726233482361
current transaction is aborted, commands ignored until end of transaction block
25.39726233482361
current transaction is aborted, commands ignored until end of transaction block
25.398263692855835
current transaction is aborted, commands ignored until end of transaction block
25.39833402633667
current transaction is aborted, commands ignored until end of transaction block
25.39833402633667
current transaction is aborted, commands ignored until end of transaction block
25.399261236190796
current transaction is aborted, commands ignored until end of transaction block
25.399261236190796
current transaction is aborted, commands ignored until end of transaction block
25.400264739990234
current transaction is aborted, commands ignored until end of transaction block
25.400264739990234
current transaction is aborted, commands ignored until end of transaction block
25.401272535324097
current transaction is aborted, commands ignored until end of transaction block
25.401272535324097
current transaction is aborted, commands ignored until end of transaction block
25.402263879776
current transaction is aborted, commands ignored until end of transaction block
25.402263879776
current transaction is aborted, commands ignored until end of transaction block
25.40326189994812
current transaction is aborted, commands ignored until end of transaction block
25.40326189994812
current transaction is aborted, commands ignored until end of transaction block
25.404274940490723
current transaction is aborted, commands ignored until end of transaction block
25.404274940490723
current transaction is aborted, commands ignored until end of transaction block
25.405264377593994
current transaction is aborted, commands ignored until end of transaction block
25.405264377593994
current transaction is aborted, commands ignored until end of transaction block
25.405264377593994
current transaction is aborted, commands ignored until end of transaction block
25.406262636184692
current transaction is aborted, commands ignored until end of transaction block
25.407262563705444
current transaction is aborted, commands ignored until end of transaction block
25.407262563705444
current transaction is aborted, commands ignored until end of transaction block
25.409263372421265
current transaction is aborted, commands ignored until end of transaction block
25.409263372421265
current transaction is aborted, commands ignored until end of transaction block
25.410267114639282
current transaction is aborted, commands ignored until end of transaction block
25.410267114639282
current transaction is aborted, commands ignored until end of transaction block
25.411260843276978
current transaction is aborted, commands ignored until end of transaction block
25.411260843276978
current transaction is aborted, commands ignored until end of transaction block
25.4122633934021
current transaction is aborted, commands ignored until end of transaction block
25.4122633934021
current transaction is aborted, commands ignored until end of transaction block
25.413362979888916
current transaction is aborted, commands ignored until end of transaction block
25.413362979888916
current transaction is aborted, commands ignored until end of transaction block
25.414340257644653
current transaction is aborted, commands ignored until end of transaction block
25.414340257644653
current transaction is aborted, commands ignored until end of transaction block
25.415273904800415
current transaction is aborted, commands ignored until end of transaction block
25.415273904800415
current transaction is aborted, commands ignored until end of transaction block
25.416263580322266
current transaction is aborted, commands ignored until end of transaction block
25.417267084121704
current transaction is aborted, commands ignored until end of transaction block
25.417267084121704
current transaction is aborted, commands ignored until end of transaction block
25.417267084121704
current transaction is aborted, commands ignored until end of transaction block
25.41826367378235
current transaction is aborted, commands ignored until end of transaction block
25.41826367378235
current transaction is aborted, commands ignored until end of transaction block
25.419260501861572
current transaction is aborted, commands ignored until end of transaction block
25.419260501861572
current transaction is aborted, commands ignored until end of transaction block
25.420260429382324
current transaction is aborted, commands ignored until end of transaction block
25.420260429382324
current transaction is aborted, commands ignored until end of transaction block
25.420260429382324
current transaction is aborted, commands ignored until end of transaction block
25.421262741088867
current transaction is aborted, commands ignored until end of transaction block
25.42263102531433
current transaction is aborted, commands ignored until end of transaction block
25.423266410827637
current transaction is aborted, commands ignored until end of transaction block
25.423266410827637
current transaction is aborted, commands ignored until end of transaction block
25.424428462982178
current transaction is aborted, commands ignored until end of transaction block
25.425299644470215
current transaction is aborted, commands ignored until end of transaction block
25.42543649673462
current transaction is aborted, commands ignored until end of transaction block
25.426496505737305
current transaction is aborted, commands ignored until end of transaction block
25.426496505737305
current transaction is aborted, commands ignored until end of transaction block
25.427260875701904
current transaction is aborted, commands ignored until end of transaction block
25.427260875701904
current transaction is aborted, commands ignored until end of transaction block
25.42826819419861
current transaction is aborted, commands ignored until end of transaction block
25.42826819419861
current transaction is aborted, commands ignored until end of transaction block
25.429268836975098
current transaction is aborted, commands ignored until end of transaction block
25.429268836975098
current transaction is aborted, commands ignored until end of transaction block
25.430402040481567
current transaction is aborted, commands ignored until end of transaction block
25.430402040481567
current transaction is aborted, commands ignored until end of transaction block
25.431267976760864
current transaction is aborted, commands ignored until end of transaction block
25.43235754966736
current transaction is aborted, commands ignored until end of transaction block
25.43235754966736
current transaction is aborted, commands ignored until end of transaction block
25.433263301849365
current transaction is aborted, commands ignored until end of transaction block
25.433263301849365
current transaction is aborted, commands ignored until end of transaction block
25.434264659881592
current transaction is aborted, commands ignored until end of transaction block
25.434264659881592
current transaction is aborted, commands ignored until end of transaction block
25.434264659881592
current transaction is aborted, commands ignored until end of transaction block
25.43526315689087
current transaction is aborted, commands ignored until end of transaction block
25.43526315689087
current transaction is aborted, commands ignored until end of transaction block
25.436266660690308
current transaction is aborted, commands ignored until end of transaction block
25.437314748764038
current transaction is aborted, commands ignored until end of transaction block
25.437314748764038
current transaction is aborted, commands ignored until end of transaction block
25.438263416290283
current transaction is aborted, commands ignored until end of transaction block
25.438263416290283
current transaction is aborted, commands ignored until end of transaction block
25.439342975616455
current transaction is aborted, commands ignored until end of transaction block
25.439342975616455
current transaction is aborted, commands ignored until end of transaction block
25.440265417099
current transaction is aborted, commands ignored until end of transaction block
25.440265417099
current transaction is aborted, commands ignored until end of transaction block
25.440265417099
current transaction is aborted, commands ignored until end of transaction block
25.441298723220825
current transaction is aborted, commands ignored until end of transaction block
25.441298723220825
current transaction is aborted, commands ignored until end of transaction block
25.442265510559082
current transaction is aborted, commands ignored until end of transaction block
25.44329047203064
current transaction is aborted, commands ignored until end of transaction block
25.444304943084717
current transaction is aborted, commands ignored until end of transaction block
25.444304943084717
current transaction is aborted, commands ignored until end of transaction block
25.444304943084717
current transaction is aborted, commands ignored until end of transaction block
25.445266008377075
current transaction is aborted, commands ignored until end of transaction block
25.445266008377075
current transaction is aborted, commands ignored until end of transaction block
25.446424961090088
current transaction is aborted, commands ignored until end of transaction block
25.446424961090088
current transaction is aborted, commands ignored until end of transaction block
25.447301864624023
current transaction is aborted, commands ignored until end of transaction block
25.447301864624023
current transaction is aborted, commands ignored until end of transaction block
25.448302507400513
current transaction is aborted, commands ignored until end of transaction block
25.448302507400513
current transaction is aborted, commands ignored until end of transaction block
25.44926381111145
current transaction is aborted, commands ignored until end of transaction block
25.44926381111145
current transaction is aborted, commands ignored until end of transaction block
25.45028281211853
current transaction is aborted, commands ignored until end of transaction block
25.451263189315796
current transaction is aborted, commands ignored until end of transaction block
25.451263189315796
current transaction is aborted, commands ignored until end of transaction block
25.452460527420044
current transaction is aborted, commands ignored until end of transaction block
25.452460527420044
current transaction is aborted, commands ignored until end of transaction block
25.453301429748535
current transaction is aborted, commands ignored until end of transaction block
25.453301429748535
current transaction is aborted, commands ignored until end of transaction block
25.454267740249634
current transaction is aborted, commands ignored until end of transaction block
25.454267740249634
current transaction is aborted, commands ignored until end of transaction block
25.455304861068726
current transaction is aborted, commands ignored until end of transaction block
25.455304861068726
current transaction is aborted, commands ignored until end of transaction block
25.45626664161682
current transaction is aborted, commands ignored until end of transaction block
25.45626664161682
current transaction is aborted, commands ignored until end of transaction block
25.457453966140747
current transaction is aborted, commands ignored until end of transaction block
25.458266496658325
current transaction is aborted, commands ignored until end of transaction block
25.458266496658325
current transaction is aborted, commands ignored until end of transaction block
25.45931077003479
current transaction is aborted, commands ignored until end of transaction block
25.45931077003479
current transaction is aborted, commands ignored until end of transaction block
25.460302591323853
current transaction is aborted, commands ignored until end of transaction block
25.460302591323853
current transaction is aborted, commands ignored until end of transaction block
25.461297750473022
current transaction is aborted, commands ignored until end of transaction block
25.461297750473022
current transaction is aborted, commands ignored until end of transaction block
25.46230173110962
current transaction is aborted, commands ignored until end of transaction block
25.46230173110962
current transaction is aborted, commands ignored until end of transaction block
25.46326518058777
current transaction is aborted, commands ignored until end of transaction block
25.46326518058777
current transaction is aborted, commands ignored until end of transaction block
25.464810848236084
current transaction is aborted, commands ignored until end of transaction block
25.46526789665222
current transaction is aborted, commands ignored until end of transaction block
25.46526789665222
current transaction is aborted, commands ignored until end of transaction block
25.466265439987183
current transaction is aborted, commands ignored until end of transaction block
25.466265439987183
current transaction is aborted, commands ignored until end of transaction block
25.467334508895874
current transaction is aborted, commands ignored until end of transaction block
25.467334508895874
current transaction is aborted, commands ignored until end of transaction block
25.46830153465271
current transaction is aborted, commands ignored until end of transaction block
25.46830153465271
current transaction is aborted, commands ignored until end of transaction block
25.469656467437744
current transaction is aborted, commands ignored until end of transaction block
25.470265865325928
current transaction is aborted, commands ignored until end of transaction block
25.470265865325928
current transaction is aborted, commands ignored until end of transaction block
25.471315383911133
current transaction is aborted, commands ignored until end of transaction block
25.471315383911133
current transaction is aborted, commands ignored until end of transaction block
25.472294569015503
current transaction is aborted, commands ignored until end of transaction block
25.472294569015503
current transaction is aborted, commands ignored until end of transaction block
25.473299264907837
current transaction is aborted, commands ignored until end of transaction block
25.473299264907837
current transaction is aborted, commands ignored until end of transaction block
25.474300622940063
current transaction is aborted, commands ignored until end of transaction block
25.474300622940063
current transaction is aborted, commands ignored until end of transaction block
25.475406885147095
current transaction is aborted, commands ignored until end of transaction block
25.475406885147095
current transaction is aborted, commands ignored until end of transaction block
25.476340532302856
current transaction is aborted, commands ignored until end of transaction block
25.476340532302856
current transaction is aborted, commands ignored until end of transaction block
25.47831439971924
current transaction is aborted, commands ignored until end of transaction block
25.47831439971924
current transaction is aborted, commands ignored until end of transaction block
25.479264497756958
current transaction is aborted, commands ignored until end of transaction block
25.48030686378479
current transaction is aborted, commands ignored until end of transaction block
25.48030686378479
current transaction is aborted, commands ignored until end of transaction block
25.481264352798462
current transaction is aborted, commands ignored until end of transaction block
25.481264352798462
current transaction is aborted, commands ignored until end of transaction block
25.482303380966187
current transaction is aborted, commands ignored until end of transaction block
25.482303380966187
current transaction is aborted, commands ignored until end of transaction block
25.48336124420166
current transaction is aborted, commands ignored until end of transaction block
25.484275341033936
current transaction is aborted, commands ignored until end of transaction block
25.485270023345947
current transaction is aborted, commands ignored until end of transaction block
25.486275911331177
current transaction is aborted, commands ignored until end of transaction block
25.487430095672607
current transaction is aborted, commands ignored until end of transaction block
25.488271713256836
current transaction is aborted, commands ignored until end of transaction block
25.488271713256836
current transaction is aborted, commands ignored until end of transaction block
25.48926854133606
current transaction is aborted, commands ignored until end of transaction block
25.49057126045227
current transaction is aborted, commands ignored until end of transaction block
25.491272449493408
current transaction is aborted, commands ignored until end of transaction block
25.491272449493408
current transaction is aborted, commands ignored until end of transaction block
25.492267608642578
current transaction is aborted, commands ignored until end of transaction block
25.493419647216797
current transaction is aborted, commands ignored until end of transaction block
25.494269371032715
current transaction is aborted, commands ignored until end of transaction block
25.494269371032715
current transaction is aborted, commands ignored until end of transaction block
25.495290279388428
current transaction is aborted, commands ignored until end of transaction block
25.496268272399902
current transaction is aborted, commands ignored until end of transaction block
25.49727177619934
current transaction is aborted, commands ignored until end of transaction block
25.498265504837036
current transaction is aborted, commands ignored until end of transaction block
25.498265504837036
current transaction is aborted, commands ignored until end of transaction block
25.49935507774353
current transaction is aborted, commands ignored until end of transaction block
25.500316619873047
current transaction is aborted, commands ignored until end of transaction block
25.5012686252594
current transaction is aborted, commands ignored until end of transaction block
25.5012686252594
current transaction is aborted, commands ignored until end of transaction block
25.50226926803589
current transaction is aborted, commands ignored until end of transaction block
25.503273248672485
current transaction is aborted, commands ignored until end of transaction block
25.504273653030396
current transaction is aborted, commands ignored until end of transaction block
25.505274772644043
current transaction is aborted, commands ignored until end of transaction block
25.50628089904785
current transaction is aborted, commands ignored until end of transaction block
25.50727391242981
current transaction is aborted, commands ignored until end of transaction block
25.5082745552063
current transaction is aborted, commands ignored until end of transaction block
25.50944471359253
current transaction is aborted, commands ignored until end of transaction block
25.51048231124878
current transaction is aborted, commands ignored until end of transaction block
25.51148748397827
current transaction is aborted, commands ignored until end of transaction block
25.513484716415405
current transaction is aborted, commands ignored until end of transaction block
25.51447367668152
current transaction is aborted, commands ignored until end of transaction block
25.516470432281494
current transaction is aborted, commands ignored until end of transaction block
25.517473459243774
current transaction is aborted, commands ignored until end of transaction block
25.51847267150879
current transaction is aborted, commands ignored until end of transaction block
25.520470142364502
current transaction is aborted, commands ignored until end of transaction block
25.520470142364502
current transaction is aborted, commands ignored until end of transaction block
25.52247929573059
current transaction is aborted, commands ignored until end of transaction block
25.523521423339844
current transaction is aborted, commands ignored until end of transaction block
25.52546763420105
current transaction is aborted, commands ignored until end of transaction block
25.526476860046387
current transaction is aborted, commands ignored until end of transaction block
25.528472423553467
current transaction is aborted, commands ignored until end of transaction block
25.529473066329956
current transaction is aborted, commands ignored until end of transaction block
25.529473066329956
current transaction is aborted, commands ignored until end of transaction block
25.530519485473633
current transaction is aborted, commands ignored until end of transaction block
25.531612873077393
current transaction is aborted, commands ignored until end of transaction block
25.532482385635376
current transaction is aborted, commands ignored until end of transaction block
25.533472299575806
current transaction is aborted, commands ignored until end of transaction block
25.534470319747925
current transaction is aborted, commands ignored until end of transaction block
25.536474227905273
current transaction is aborted, commands ignored until end of transaction block
25.538475513458252
current transaction is aborted, commands ignored until end of transaction block
25.538475513458252
current transaction is aborted, commands ignored until end of transaction block
25.540472984313965
current transaction is aborted, commands ignored until end of transaction block
25.54146909713745
current transaction is aborted, commands ignored until end of transaction block
25.542471170425415
current transaction is aborted, commands ignored until end of transaction block
25.543477773666382
current transaction is aborted, commands ignored until end of transaction block
25.54556632041931
current transaction is aborted, commands ignored until end of transaction block
25.54646944999695
current transaction is aborted, commands ignored until end of transaction block
25.548536777496338
current transaction is aborted, commands ignored until end of transaction block
25.549471855163574
current transaction is aborted, commands ignored until end of transaction block
25.550477266311646
current transaction is aborted, commands ignored until end of transaction block
25.55154299736023
current transaction is aborted, commands ignored until end of transaction block
25.553467988967896
current transaction is aborted, commands ignored until end of transaction block
25.55447006225586
current transaction is aborted, commands ignored until end of transaction block
25.556604623794556
current transaction is aborted, commands ignored until end of transaction block
25.556654691696167
current transaction is aborted, commands ignored until end of transaction block
25.55765652656555
current transaction is aborted, commands ignored until end of transaction block
25.55867338180542
current transaction is aborted, commands ignored until end of transaction block
25.559922695159912
current transaction is aborted, commands ignored until end of transaction block
25.559922695159912
current transaction is aborted, commands ignored until end of transaction block
25.562140226364136
current transaction is aborted, commands ignored until end of transaction block
25.562692642211914
current transaction is aborted, commands ignored until end of transaction block
25.5636887550354
current transaction is aborted, commands ignored until end of transaction block
25.56468915939331
current transaction is aborted, commands ignored until end of transaction block
25.56468915939331
current transaction is aborted, commands ignored until end of transaction block
25.566309213638306
current transaction is aborted, commands ignored until end of transaction block
25.56686234474182
current transaction is aborted, commands ignored until end of transaction block
25.56875491142273
current transaction is aborted, commands ignored until end of transaction block
25.56875491142273
current transaction is aborted, commands ignored until end of transaction block
25.570812225341797
current transaction is aborted, commands ignored until end of transaction block
25.57179069519043
current transaction is aborted, commands ignored until end of transaction block
25.57370162010193
current transaction is aborted, commands ignored until end of transaction block
25.57370162010193
current transaction is aborted, commands ignored until end of transaction block
25.57578134536743
current transaction is aborted, commands ignored until end of transaction block
25.576727867126465
current transaction is aborted, commands ignored until end of transaction block
25.577767610549927
current transaction is aborted, commands ignored until end of transaction block
25.578731775283813
current transaction is aborted, commands ignored until end of transaction block
25.579729795455933
current transaction is aborted, commands ignored until end of transaction block
25.58072853088379
current transaction is aborted, commands ignored until end of transaction block
25.58173108100891
current transaction is aborted, commands ignored until end of transaction block
25.583011150360107
current transaction is aborted, commands ignored until end of transaction block
25.583731412887573
current transaction is aborted, commands ignored until end of transaction block
25.5857355594635
current transaction is aborted, commands ignored until end of transaction block
25.586732625961304
current transaction is aborted, commands ignored until end of transaction block
25.58772587776184
current transaction is aborted, commands ignored until end of transaction block
25.588732481002808
current transaction is aborted, commands ignored until end of transaction block
25.58972477912903
current transaction is aborted, commands ignored until end of transaction block
25.591745138168335
current transaction is aborted, commands ignored until end of transaction block
25.591745138168335
current transaction is aborted, commands ignored until end of transaction block
25.593726873397827
current transaction is aborted, commands ignored until end of transaction block
25.59473443031311
current transaction is aborted, commands ignored until end of transaction block
25.595808267593384
current transaction is aborted, commands ignored until end of transaction block
25.596732139587402
current transaction is aborted, commands ignored until end of transaction block
25.59773302078247
current transaction is aborted, commands ignored until end of transaction block
25.598750829696655
current transaction is aborted, commands ignored until end of transaction block
25.599732398986816
current transaction is aborted, commands ignored until end of transaction block
25.60076665878296
current transaction is aborted, commands ignored until end of transaction block
25.601731538772583
current transaction is aborted, commands ignored until end of transaction block
25.602739095687866
current transaction is aborted, commands ignored until end of transaction block
25.60372805595398
current transaction is aborted, commands ignored until end of transaction block
25.60372805595398
current transaction is aborted, commands ignored until end of transaction block
25.605332612991333
current transaction is aborted, commands ignored until end of transaction block
25.605751514434814
current transaction is aborted, commands ignored until end of transaction block
25.606734037399292
current transaction is aborted, commands ignored until end of transaction block
25.607725858688354
current transaction is aborted, commands ignored until end of transaction block
25.607725858688354
current transaction is aborted, commands ignored until end of transaction block
25.608726978302002
current transaction is aborted, commands ignored until end of transaction block
25.60972547531128
current transaction is aborted, commands ignored until end of transaction block
25.611029863357544
current transaction is aborted, commands ignored until end of transaction block
25.611774682998657
current transaction is aborted, commands ignored until end of transaction block
25.612736463546753
current transaction is aborted, commands ignored until end of transaction block
25.613770246505737
current transaction is aborted, commands ignored until end of transaction block
25.614862203598022
current transaction is aborted, commands ignored until end of transaction block
25.615735054016113
current transaction is aborted, commands ignored until end of transaction block
25.616734504699707
current transaction is aborted, commands ignored until end of transaction block
25.617839097976685
current transaction is aborted, commands ignored until end of transaction block
25.618725299835205
current transaction is aborted, commands ignored until end of transaction block
25.619730949401855
current transaction is aborted, commands ignored until end of transaction block
25.620353937149048
current transaction is aborted, commands ignored until end of transaction block
25.62072491645813
current transaction is aborted, commands ignored until end of transaction block
25.62181305885315
current transaction is aborted, commands ignored until end of transaction block
25.622724533081055
current transaction is aborted, commands ignored until end of transaction block
25.623730421066284
current transaction is aborted, commands ignored until end of transaction block
25.623730421066284
current transaction is aborted, commands ignored until end of transaction block
25.625004529953003
current transaction is aborted, commands ignored until end of transaction block
25.625734567642212
current transaction is aborted, commands ignored until end of transaction block
26.11012315750122
current transaction is aborted, commands ignored until end of transaction block
26.111008167266846
current transaction is aborted, commands ignored until end of transaction block
26.11200761795044
current transaction is aborted, commands ignored until end of transaction block
26.11200761795044
current transaction is aborted, commands ignored until end of transaction block
26.113038301467896
current transaction is aborted, commands ignored until end of transaction block
26.113038301467896
current transaction is aborted, commands ignored until end of transaction block
26.114007472991943
current transaction is aborted, commands ignored until end of transaction block
26.1149685382843
current transaction is aborted, commands ignored until end of transaction block
26.1149685382843
current transaction is aborted, commands ignored until end of transaction block
26.115975618362427
current transaction is aborted, commands ignored until end of transaction block
26.11753249168396
current transaction is aborted, commands ignored until end of transaction block
26.117968320846558
current transaction is aborted, commands ignored until end of transaction block
26.119142293930054
current transaction is aborted, commands ignored until end of transaction block
26.119142293930054
current transaction is aborted, commands ignored until end of transaction block
26.119967222213745
current transaction is aborted, commands ignored until end of transaction block
26.120971202850342
current transaction is aborted, commands ignored until end of transaction block
26.12200927734375
current transaction is aborted, commands ignored until end of transaction block
26.12200927734375
current transaction is aborted, commands ignored until end of transaction block
26.12296986579895
current transaction is aborted, commands ignored until end of transaction block
26.124022960662842
current transaction is aborted, commands ignored until end of transaction block
26.12496519088745
current transaction is aborted, commands ignored until end of transaction block
26.12496519088745
current transaction is aborted, commands ignored until end of transaction block
26.125969648361206
current transaction is aborted, commands ignored until end of transaction block
26.126969814300537
current transaction is aborted, commands ignored until end of transaction block
26.127965927124023
current transaction is aborted, commands ignored until end of transaction block
26.127965927124023
current transaction is aborted, commands ignored until end of transaction block
26.129004955291748
current transaction is aborted, commands ignored until end of transaction block
26.129972457885742
current transaction is aborted, commands ignored until end of transaction block
26.129972457885742
current transaction is aborted, commands ignored until end of transaction block
26.130966901779175
current transaction is aborted, commands ignored until end of transaction block
26.132046699523926
current transaction is aborted, commands ignored until end of transaction block
26.132046699523926
current transaction is aborted, commands ignored until end of transaction block
26.132996082305908
current transaction is aborted, commands ignored until end of transaction block
26.134007215499878
current transaction is aborted, commands ignored until end of transaction block
26.134007215499878
current transaction is aborted, commands ignored until end of transaction block
26.135002374649048
current transaction is aborted, commands ignored until end of transaction block
26.135002374649048
current transaction is aborted, commands ignored until end of transaction block
26.13601064682007
current transaction is aborted, commands ignored until end of transaction block
26.136969566345215
current transaction is aborted, commands ignored until end of transaction block
26.137972354888916
current transaction is aborted, commands ignored until end of transaction block
26.138946056365967
current transaction is aborted, commands ignored until end of transaction block
26.13896608352661
current transaction is aborted, commands ignored until end of transaction block
26.139999628067017
current transaction is aborted, commands ignored until end of transaction block
26.139999628067017
current transaction is aborted, commands ignored until end of transaction block
26.14100217819214
current transaction is aborted, commands ignored until end of transaction block
26.14100217819214
current transaction is aborted, commands ignored until end of transaction block
26.142002820968628
current transaction is aborted, commands ignored until end of transaction block
26.143009424209595
current transaction is aborted, commands ignored until end of transaction block
26.143009424209595
current transaction is aborted, commands ignored until end of transaction block
26.143975734710693
current transaction is aborted, commands ignored until end of transaction block
26.145028829574585
current transaction is aborted, commands ignored until end of transaction block
26.14600968360901
current transaction is aborted, commands ignored until end of transaction block
26.147002935409546
current transaction is aborted, commands ignored until end of transaction block
26.147002935409546
current transaction is aborted, commands ignored until end of transaction block
26.148077487945557
current transaction is aborted, commands ignored until end of transaction block
26.148077487945557
current transaction is aborted, commands ignored until end of transaction block
26.14898443222046
current transaction is aborted, commands ignored until end of transaction block
26.1499981880188
current transaction is aborted, commands ignored until end of transaction block
26.1499981880188
current transaction is aborted, commands ignored until end of transaction block
26.151078701019287
current transaction is aborted, commands ignored until end of transaction block
26.152015209197998
current transaction is aborted, commands ignored until end of transaction block
26.153059005737305
current transaction is aborted, commands ignored until end of transaction block
26.153982400894165
current transaction is aborted, commands ignored until end of transaction block
26.154213190078735
current transaction is aborted, commands ignored until end of transaction block
26.155011892318726
current transaction is aborted, commands ignored until end of transaction block
26.15560555458069
current transaction is aborted, commands ignored until end of transaction block
26.156009674072266
current transaction is aborted, commands ignored until end of transaction block
26.15696406364441
current transaction is aborted, commands ignored until end of transaction block
26.157975912094116
current transaction is aborted, commands ignored until end of transaction block
26.158965826034546
current transaction is aborted, commands ignored until end of transaction block
26.158965826034546
current transaction is aborted, commands ignored until end of transaction block
26.16000771522522
current transaction is aborted, commands ignored until end of transaction block
26.160977840423584
current transaction is aborted, commands ignored until end of transaction block
26.160977840423584
current transaction is aborted, commands ignored until end of transaction block
26.162004470825195
current transaction is aborted, commands ignored until end of transaction block
26.162970542907715
current transaction is aborted, commands ignored until end of transaction block
26.162970542907715
current transaction is aborted, commands ignored until end of transaction block
26.163963794708252
current transaction is aborted, commands ignored until end of transaction block
26.164971113204956
current transaction is aborted, commands ignored until end of transaction block
26.165971755981445
current transaction is aborted, commands ignored until end of transaction block
26.16620182991028
current transaction is aborted, commands ignored until end of transaction block
26.16700792312622
current transaction is aborted, commands ignored until end of transaction block
26.167969226837158
current transaction is aborted, commands ignored until end of transaction block
26.16896629333496
current transaction is aborted, commands ignored until end of transaction block
26.16915726661682
current transaction is aborted, commands ignored until end of transaction block
26.170045614242554
current transaction is aborted, commands ignored until end of transaction block
26.170329809188843
current transaction is aborted, commands ignored until end of transaction block
26.171032190322876
current transaction is aborted, commands ignored until end of transaction block
26.17197036743164
current transaction is aborted, commands ignored until end of transaction block
26.17324662208557
current transaction is aborted, commands ignored until end of transaction block
26.17396855354309
current transaction is aborted, commands ignored until end of transaction block
26.17396855354309
current transaction is aborted, commands ignored until end of transaction block
26.17496967315674
current transaction is aborted, commands ignored until end of transaction block
26.17496967315674
current transaction is aborted, commands ignored until end of transaction block
26.175965785980225
current transaction is aborted, commands ignored until end of transaction block
26.176974058151245
current transaction is aborted, commands ignored until end of transaction block
26.178101301193237
current transaction is aborted, commands ignored until end of transaction block
26.178969144821167
current transaction is aborted, commands ignored until end of transaction block
26.18003797531128
current transaction is aborted, commands ignored until end of transaction block
26.18097162246704
current transaction is aborted, commands ignored until end of transaction block
26.182018518447876
current transaction is aborted, commands ignored until end of transaction block
26.182976245880127
current transaction is aborted, commands ignored until end of transaction block
26.184250116348267
current transaction is aborted, commands ignored until end of transaction block
26.18512511253357
current transaction is aborted, commands ignored until end of transaction block
26.185978412628174
current transaction is aborted, commands ignored until end of transaction block
26.188036918640137
current transaction is aborted, commands ignored until end of transaction block
26.188979625701904
current transaction is aborted, commands ignored until end of transaction block
26.19025993347168
current transaction is aborted, commands ignored until end of transaction block
26.19133448600769
current transaction is aborted, commands ignored until end of transaction block
26.191978216171265
current transaction is aborted, commands ignored until end of transaction block
26.194025993347168
current transaction is aborted, commands ignored until end of transaction block
26.19501495361328
current transaction is aborted, commands ignored until end of transaction block
26.1962149143219
current transaction is aborted, commands ignored until end of transaction block
26.196996450424194
current transaction is aborted, commands ignored until end of transaction block
26.19898295402527
current transaction is aborted, commands ignored until end of transaction block
26.199225664138794
current transaction is aborted, commands ignored until end of transaction block
26.200968265533447
current transaction is aborted, commands ignored until end of transaction block
26.202073574066162
current transaction is aborted, commands ignored until end of transaction block
26.203391551971436
current transaction is aborted, commands ignored until end of transaction block
26.203972339630127
current transaction is aborted, commands ignored until end of transaction block
26.20546579360962
current transaction is aborted, commands ignored until end of transaction block
26.20698356628418
current transaction is aborted, commands ignored until end of transaction block
26.2081139087677
current transaction is aborted, commands ignored until end of transaction block
26.20898151397705
current transaction is aborted, commands ignored until end of transaction block
26.210052967071533
current transaction is aborted, commands ignored until end of transaction block
26.211266040802002
current transaction is aborted, commands ignored until end of transaction block
26.212098360061646
current transaction is aborted, commands ignored until end of transaction block
26.214054346084595
current transaction is aborted, commands ignored until end of transaction block
26.215262413024902
current transaction is aborted, commands ignored until end of transaction block
26.216068506240845
current transaction is aborted, commands ignored until end of transaction block
26.218056201934814
current transaction is aborted, commands ignored until end of transaction block
26.21904969215393
current transaction is aborted, commands ignored until end of transaction block
26.220057249069214
current transaction is aborted, commands ignored until end of transaction block
26.221055030822754
current transaction is aborted, commands ignored until end of transaction block
26.22204279899597
current transaction is aborted, commands ignored until end of transaction block
26.22204279899597
current transaction is aborted, commands ignored until end of transaction block
26.223185300827026
current transaction is aborted, commands ignored until end of transaction block
26.223185300827026
current transaction is aborted, commands ignored until end of transaction block
26.224043607711792
current transaction is aborted, commands ignored until end of transaction block
26.224043607711792
current transaction is aborted, commands ignored until end of transaction block
26.225077629089355
current transaction is aborted, commands ignored until end of transaction block
26.225077629089355
current transaction is aborted, commands ignored until end of transaction block
26.226046800613403
current transaction is aborted, commands ignored until end of transaction block
26.226046800613403
current transaction is aborted, commands ignored until end of transaction block
26.22704267501831
current transaction is aborted, commands ignored until end of transaction block
26.2280433177948
current transaction is aborted, commands ignored until end of transaction block
26.229053020477295
current transaction is aborted, commands ignored until end of transaction block
26.229053020477295
current transaction is aborted, commands ignored until end of transaction block
26.230047464370728
current transaction is aborted, commands ignored until end of transaction block
26.230047464370728
current transaction is aborted, commands ignored until end of transaction block
26.231189250946045
current transaction is aborted, commands ignored until end of transaction block
26.231189250946045
current transaction is aborted, commands ignored until end of transaction block
26.232043504714966
current transaction is aborted, commands ignored until end of transaction block
26.232043504714966
current transaction is aborted, commands ignored until end of transaction block
26.233134984970093
current transaction is aborted, commands ignored until end of transaction block
26.233134984970093
current transaction is aborted, commands ignored until end of transaction block
26.234044551849365
current transaction is aborted, commands ignored until end of transaction block
26.234044551849365
current transaction is aborted, commands ignored until end of transaction block
26.235047817230225
current transaction is aborted, commands ignored until end of transaction block
26.235047817230225
current transaction is aborted, commands ignored until end of transaction block
26.236087799072266
current transaction is aborted, commands ignored until end of transaction block
26.23704218864441
current transaction is aborted, commands ignored until end of transaction block
26.23704218864441
current transaction is aborted, commands ignored until end of transaction block
26.238340854644775
current transaction is aborted, commands ignored until end of transaction block
26.239047527313232
current transaction is aborted, commands ignored until end of transaction block
26.239047527313232
current transaction is aborted, commands ignored until end of transaction block
26.240043878555298
current transaction is aborted, commands ignored until end of transaction block
26.24104881286621
current transaction is aborted, commands ignored until end of transaction block
26.24206829071045
current transaction is aborted, commands ignored until end of transaction block
26.24206829071045
current transaction is aborted, commands ignored until end of transaction block
26.24404239654541
current transaction is aborted, commands ignored until end of transaction block
26.24404239654541
current transaction is aborted, commands ignored until end of transaction block
26.245049238204956
current transaction is aborted, commands ignored until end of transaction block
26.24604344367981
current transaction is aborted, commands ignored until end of transaction block
26.24604344367981
current transaction is aborted, commands ignored until end of transaction block
26.247079372406006
current transaction is aborted, commands ignored until end of transaction block
26.24804973602295
current transaction is aborted, commands ignored until end of transaction block
26.249048709869385
current transaction is aborted, commands ignored until end of transaction block
26.249048709869385
current transaction is aborted, commands ignored until end of transaction block
26.250044107437134
current transaction is aborted, commands ignored until end of transaction block
26.250895977020264
current transaction is aborted, commands ignored until end of transaction block
26.251044750213623
current transaction is aborted, commands ignored until end of transaction block
26.25204634666443
current transaction is aborted, commands ignored until end of transaction block
26.253044605255127
current transaction is aborted, commands ignored until end of transaction block
26.253044605255127
current transaction is aborted, commands ignored until end of transaction block
26.2540442943573
current transaction is aborted, commands ignored until end of transaction block
26.25504159927368
current transaction is aborted, commands ignored until end of transaction block
26.25604248046875
current transaction is aborted, commands ignored until end of transaction block
26.257054805755615
current transaction is aborted, commands ignored until end of transaction block
26.25804901123047
current transaction is aborted, commands ignored until end of transaction block
26.259056568145752
current transaction is aborted, commands ignored until end of transaction block
26.26004433631897
current transaction is aborted, commands ignored until end of transaction block
26.26004433631897
current transaction is aborted, commands ignored until end of transaction block
26.261041164398193
current transaction is aborted, commands ignored until end of transaction block
26.26204824447632
current transaction is aborted, commands ignored until end of transaction block
26.26304864883423
current transaction is aborted, commands ignored until end of transaction block
26.26427435874939
current transaction is aborted, commands ignored until end of transaction block
26.26504135131836
current transaction is aborted, commands ignored until end of transaction block
26.266042709350586
current transaction is aborted, commands ignored until end of transaction block
26.266042709350586
current transaction is aborted, commands ignored until end of transaction block
26.267301559448242
current transaction is aborted, commands ignored until end of transaction block
26.26803994178772
current transaction is aborted, commands ignored until end of transaction block
26.269044399261475
current transaction is aborted, commands ignored until end of transaction block
26.270044803619385
current transaction is aborted, commands ignored until end of transaction block
26.27204442024231
current transaction is aborted, commands ignored until end of transaction block
26.273057222366333
current transaction is aborted, commands ignored until end of transaction block
26.274051666259766
current transaction is aborted, commands ignored until end of transaction block
26.274051666259766
current transaction is aborted, commands ignored until end of transaction block
26.275315523147583
current transaction is aborted, commands ignored until end of transaction block
26.27605152130127
current transaction is aborted, commands ignored until end of transaction block
26.27704644203186
current transaction is aborted, commands ignored until end of transaction block
26.278052806854248
current transaction is aborted, commands ignored until end of transaction block
26.27903985977173
current transaction is aborted, commands ignored until end of transaction block
26.27903985977173
current transaction is aborted, commands ignored until end of transaction block
26.280093908309937
current transaction is aborted, commands ignored until end of transaction block
26.280093908309937
current transaction is aborted, commands ignored until end of transaction block
26.281041622161865
current transaction is aborted, commands ignored until end of transaction block
26.282040119171143
current transaction is aborted, commands ignored until end of transaction block
26.283044576644897
current transaction is aborted, commands ignored until end of transaction block
26.284050464630127
current transaction is aborted, commands ignored until end of transaction block
26.28605580329895
current transaction is aborted, commands ignored until end of transaction block
26.287046909332275
current transaction is aborted, commands ignored until end of transaction block
26.28804349899292
current transaction is aborted, commands ignored until end of transaction block
26.28804349899292
current transaction is aborted, commands ignored until end of transaction block
26.289039850234985
current transaction is aborted, commands ignored until end of transaction block
26.290053129196167
current transaction is aborted, commands ignored until end of transaction block
26.29103970527649
current transaction is aborted, commands ignored until end of transaction block
26.29103970527649
current transaction is aborted, commands ignored until end of transaction block
26.292407989501953
current transaction is aborted, commands ignored until end of transaction block
26.292407989501953
current transaction is aborted, commands ignored until end of transaction block
26.293081998825073
current transaction is aborted, commands ignored until end of transaction block
26.294044733047485
current transaction is aborted, commands ignored until end of transaction block
26.294044733047485
current transaction is aborted, commands ignored until end of transaction block
26.29504418373108
current transaction is aborted, commands ignored until end of transaction block
26.29504418373108
current transaction is aborted, commands ignored until end of transaction block
26.296077489852905
current transaction is aborted, commands ignored until end of transaction block
26.29704451560974
current transaction is aborted, commands ignored until end of transaction block
26.298142910003662
current transaction is aborted, commands ignored until end of transaction block
26.300573587417603
current transaction is aborted, commands ignored until end of transaction block
26.301042318344116
current transaction is aborted, commands ignored until end of transaction block
26.30205011367798
current transaction is aborted, commands ignored until end of transaction block
26.304051637649536
current transaction is aborted, commands ignored until end of transaction block
26.30505394935608
current transaction is aborted, commands ignored until end of transaction block
26.30605411529541
current transaction is aborted, commands ignored until end of transaction block
26.30605411529541
current transaction is aborted, commands ignored until end of transaction block
26.307173252105713
current transaction is aborted, commands ignored until end of transaction block
26.307173252105713
current transaction is aborted, commands ignored until end of transaction block
26.308340311050415
current transaction is aborted, commands ignored until end of transaction block
26.30904221534729
current transaction is aborted, commands ignored until end of transaction block
26.30904221534729
current transaction is aborted, commands ignored until end of transaction block
26.31007218360901
current transaction is aborted, commands ignored until end of transaction block
26.311072826385498
current transaction is aborted, commands ignored until end of transaction block
26.31406545639038
current transaction is aborted, commands ignored until end of transaction block
26.31504535675049
current transaction is aborted, commands ignored until end of transaction block
26.31604766845703
current transaction is aborted, commands ignored until end of transaction block
26.31604766845703
current transaction is aborted, commands ignored until end of transaction block
26.317056894302368
current transaction is aborted, commands ignored until end of transaction block
26.31806230545044
current transaction is aborted, commands ignored until end of transaction block
26.31906533241272
current transaction is aborted, commands ignored until end of transaction block
26.32003951072693
current transaction is aborted, commands ignored until end of transaction block
26.32003951072693
current transaction is aborted, commands ignored until end of transaction block
26.32108473777771
current transaction is aborted, commands ignored until end of transaction block
26.32108473777771
current transaction is aborted, commands ignored until end of transaction block
26.32234287261963
current transaction is aborted, commands ignored until end of transaction block
26.323044300079346
current transaction is aborted, commands ignored until end of transaction block
26.323044300079346
current transaction is aborted, commands ignored until end of transaction block
26.32408332824707
current transaction is aborted, commands ignored until end of transaction block
26.325056552886963
current transaction is aborted, commands ignored until end of transaction block
26.326242923736572
current transaction is aborted, commands ignored until end of transaction block
26.326242923736572
current transaction is aborted, commands ignored until end of transaction block
26.327041625976562
current transaction is aborted, commands ignored until end of transaction block
26.328044891357422
current transaction is aborted, commands ignored until end of transaction block
26.328044891357422
current transaction is aborted, commands ignored until end of transaction block
26.32904291152954
current transaction is aborted, commands ignored until end of transaction block
26.32904291152954
current transaction is aborted, commands ignored until end of transaction block
26.32904291152954
current transaction is aborted, commands ignored until end of transaction block
26.330041646957397
current transaction is aborted, commands ignored until end of transaction block
26.331044912338257
current transaction is aborted, commands ignored until end of transaction block
26.331044912338257
current transaction is aborted, commands ignored until end of transaction block
26.33305048942566
current transaction is aborted, commands ignored until end of transaction block
26.33305048942566
current transaction is aborted, commands ignored until end of transaction block
26.334105730056763
current transaction is aborted, commands ignored until end of transaction block
26.335043907165527
current transaction is aborted, commands ignored until end of transaction block
26.33608078956604
current transaction is aborted, commands ignored until end of transaction block
26.33608078956604
current transaction is aborted, commands ignored until end of transaction block
26.337042808532715
current transaction is aborted, commands ignored until end of transaction block
26.338047742843628
current transaction is aborted, commands ignored until end of transaction block
26.339048862457275
current transaction is aborted, commands ignored until end of transaction block
26.339048862457275
current transaction is aborted, commands ignored until end of transaction block
26.34104561805725
current transaction is aborted, commands ignored until end of transaction block
26.34205389022827
current transaction is aborted, commands ignored until end of transaction block
26.343042850494385
current transaction is aborted, commands ignored until end of transaction block
26.34404468536377
current transaction is aborted, commands ignored until end of transaction block
26.34404468536377
current transaction is aborted, commands ignored until end of transaction block
26.34504771232605
current transaction is aborted, commands ignored until end of transaction block
26.346047401428223
current transaction is aborted, commands ignored until end of transaction block
26.347044944763184
current transaction is aborted, commands ignored until end of transaction block
26.347044944763184
current transaction is aborted, commands ignored until end of transaction block
26.34805178642273
current transaction is aborted, commands ignored until end of transaction block
26.34805178642273
current transaction is aborted, commands ignored until end of transaction block
26.349230766296387
current transaction is aborted, commands ignored until end of transaction block
26.350393295288086
current transaction is aborted, commands ignored until end of transaction block
26.351044416427612
current transaction is aborted, commands ignored until end of transaction block
26.35204768180847
current transaction is aborted, commands ignored until end of transaction block
26.35204768180847
current transaction is aborted, commands ignored until end of transaction block
26.3530433177948
current transaction is aborted, commands ignored until end of transaction block
26.3530433177948
current transaction is aborted, commands ignored until end of transaction block
26.354394912719727
current transaction is aborted, commands ignored until end of transaction block
26.354394912719727
current transaction is aborted, commands ignored until end of transaction block
26.355041980743408
current transaction is aborted, commands ignored until end of transaction block
26.356046676635742
current transaction is aborted, commands ignored until end of transaction block
26.356046676635742
current transaction is aborted, commands ignored until end of transaction block
26.357250928878784
current transaction is aborted, commands ignored until end of transaction block
26.358044862747192
current transaction is aborted, commands ignored until end of transaction block
26.359043836593628
current transaction is aborted, commands ignored until end of transaction block
26.359043836593628
current transaction is aborted, commands ignored until end of transaction block
26.360048532485962
current transaction is aborted, commands ignored until end of transaction block
26.361152172088623
current transaction is aborted, commands ignored until end of transaction block
26.362057209014893
current transaction is aborted, commands ignored until end of transaction block
26.362057209014893
current transaction is aborted, commands ignored until end of transaction block
26.36304521560669
current transaction is aborted, commands ignored until end of transaction block
26.364046335220337
current transaction is aborted, commands ignored until end of transaction block
26.364046335220337
current transaction is aborted, commands ignored until end of transaction block
26.365049600601196
current transaction is aborted, commands ignored until end of transaction block
26.36606192588806
current transaction is aborted, commands ignored until end of transaction block
26.367048263549805
current transaction is aborted, commands ignored until end of transaction block
26.367048263549805
current transaction is aborted, commands ignored until end of transaction block
26.368085622787476
current transaction is aborted, commands ignored until end of transaction block
26.369064807891846
current transaction is aborted, commands ignored until end of transaction block
26.36938762664795
current transaction is aborted, commands ignored until end of transaction block
26.370043992996216
current transaction is aborted, commands ignored until end of transaction block
26.370043992996216
current transaction is aborted, commands ignored until end of transaction block
26.37104296684265
current transaction is aborted, commands ignored until end of transaction block
26.37104296684265
current transaction is aborted, commands ignored until end of transaction block
26.372041702270508
current transaction is aborted, commands ignored until end of transaction block
26.37305188179016
current transaction is aborted, commands ignored until end of transaction block
26.37305188179016
current transaction is aborted, commands ignored until end of transaction block
26.37404727935791
current transaction is aborted, commands ignored until end of transaction block
26.37404727935791
current transaction is aborted, commands ignored until end of transaction block
26.375072717666626
current transaction is aborted, commands ignored until end of transaction block
26.375072717666626
current transaction is aborted, commands ignored until end of transaction block
26.376044511795044
current transaction is aborted, commands ignored until end of transaction block
26.377054452896118
current transaction is aborted, commands ignored until end of transaction block
26.377054452896118
current transaction is aborted, commands ignored until end of transaction block
26.378063678741455
current transaction is aborted, commands ignored until end of transaction block
26.37904977798462
current transaction is aborted, commands ignored until end of transaction block
26.38004446029663
current transaction is aborted, commands ignored until end of transaction block
26.38004446029663
current transaction is aborted, commands ignored until end of transaction block
26.38105082511902
current transaction is aborted, commands ignored until end of transaction block
26.382047176361084
current transaction is aborted, commands ignored until end of transaction block
26.383044481277466
current transaction is aborted, commands ignored until end of transaction block
26.383044481277466
current transaction is aborted, commands ignored until end of transaction block
26.384039878845215
current transaction is aborted, commands ignored until end of transaction block
26.384039878845215
current transaction is aborted, commands ignored until end of transaction block
26.38504648208618
current transaction is aborted, commands ignored until end of transaction block
26.38504648208618
current transaction is aborted, commands ignored until end of transaction block
26.38609266281128
current transaction is aborted, commands ignored until end of transaction block
26.38609266281128
current transaction is aborted, commands ignored until end of transaction block
26.387041330337524
current transaction is aborted, commands ignored until end of transaction block
26.38809561729431
current transaction is aborted, commands ignored until end of transaction block
26.38809561729431
current transaction is aborted, commands ignored until end of transaction block
26.389044523239136
current transaction is aborted, commands ignored until end of transaction block
26.390040636062622
current transaction is aborted, commands ignored until end of transaction block
26.390040636062622
current transaction is aborted, commands ignored until end of transaction block
26.391048192977905
current transaction is aborted, commands ignored until end of transaction block
26.391048192977905
current transaction is aborted, commands ignored until end of transaction block
26.392048358917236
current transaction is aborted, commands ignored until end of transaction block
26.392048358917236
current transaction is aborted, commands ignored until end of transaction block
26.39304828643799
current transaction is aborted, commands ignored until end of transaction block
26.39404320716858
current transaction is aborted, commands ignored until end of transaction block
26.395041942596436
current transaction is aborted, commands ignored until end of transaction block
26.395041942596436
current transaction is aborted, commands ignored until end of transaction block
26.39704394340515
current transaction is aborted, commands ignored until end of transaction block
26.39711570739746
current transaction is aborted, commands ignored until end of transaction block
26.3980655670166
current transaction is aborted, commands ignored until end of transaction block
26.3980655670166
current transaction is aborted, commands ignored until end of transaction block
26.399043321609497
current transaction is aborted, commands ignored until end of transaction block
26.399043321609497
current transaction is aborted, commands ignored until end of transaction block
26.400044918060303
current transaction is aborted, commands ignored until end of transaction block
26.4020516872406
current transaction is aborted, commands ignored until end of transaction block
26.403085470199585
current transaction is aborted, commands ignored until end of transaction block
26.403085470199585
current transaction is aborted, commands ignored until end of transaction block
26.404049158096313
current transaction is aborted, commands ignored until end of transaction block
26.40504240989685
current transaction is aborted, commands ignored until end of transaction block
26.406455278396606
current transaction is aborted, commands ignored until end of transaction block
26.407108783721924
current transaction is aborted, commands ignored until end of transaction block
26.408041954040527
current transaction is aborted, commands ignored until end of transaction block
26.408041954040527
current transaction is aborted, commands ignored until end of transaction block
26.409039735794067
current transaction is aborted, commands ignored until end of transaction block
26.409039735794067
current transaction is aborted, commands ignored until end of transaction block
26.41007971763611
current transaction is aborted, commands ignored until end of transaction block
26.41007971763611
current transaction is aborted, commands ignored until end of transaction block
26.411100149154663
current transaction is aborted, commands ignored until end of transaction block
26.4120991230011
current transaction is aborted, commands ignored until end of transaction block
26.412189245224
current transaction is aborted, commands ignored until end of transaction block
26.413100481033325
current transaction is aborted, commands ignored until end of transaction block
26.41509771347046
current transaction is aborted, commands ignored until end of transaction block
26.416097164154053
current transaction is aborted, commands ignored until end of transaction block
26.416097164154053
current transaction is aborted, commands ignored until end of transaction block
26.417097806930542
current transaction is aborted, commands ignored until end of transaction block
26.418100357055664
current transaction is aborted, commands ignored until end of transaction block
26.419095516204834
current transaction is aborted, commands ignored until end of transaction block
26.419095516204834
current transaction is aborted, commands ignored until end of transaction block
26.42009997367859
current transaction is aborted, commands ignored until end of transaction block
26.420294761657715
current transaction is aborted, commands ignored until end of transaction block
26.421101570129395
current transaction is aborted, commands ignored until end of transaction block
26.421101570129395
current transaction is aborted, commands ignored until end of transaction block
26.422093391418457
current transaction is aborted, commands ignored until end of transaction block
26.42310118675232
current transaction is aborted, commands ignored until end of transaction block
26.42310118675232
current transaction is aborted, commands ignored until end of transaction block
26.42410111427307
current transaction is aborted, commands ignored until end of transaction block
26.425103187561035
current transaction is aborted, commands ignored until end of transaction block
26.42609453201294
current transaction is aborted, commands ignored until end of transaction block
26.42609453201294
current transaction is aborted, commands ignored until end of transaction block
26.427101850509644
current transaction is aborted, commands ignored until end of transaction block
26.42810821533203
current transaction is aborted, commands ignored until end of transaction block
26.42909836769104
current transaction is aborted, commands ignored until end of transaction block
26.43009352684021
current transaction is aborted, commands ignored until end of transaction block
26.431111097335815
current transaction is aborted, commands ignored until end of transaction block
26.431251764297485
current transaction is aborted, commands ignored until end of transaction block
26.43209719657898
current transaction is aborted, commands ignored until end of transaction block
26.433094263076782
current transaction is aborted, commands ignored until end of transaction block
26.433094263076782
current transaction is aborted, commands ignored until end of transaction block
26.434093236923218
current transaction is aborted, commands ignored until end of transaction block
26.434093236923218
current transaction is aborted, commands ignored until end of transaction block
26.43509531021118
current transaction is aborted, commands ignored until end of transaction block
26.43609309196472
current transaction is aborted, commands ignored until end of transaction block
26.43709373474121
current transaction is aborted, commands ignored until end of transaction block
26.43713355064392
current transaction is aborted, commands ignored until end of transaction block
26.438101053237915
current transaction is aborted, commands ignored until end of transaction block
26.438101053237915
current transaction is aborted, commands ignored until end of transaction block
26.439104318618774
current transaction is aborted, commands ignored until end of transaction block
26.44009256362915
current transaction is aborted, commands ignored until end of transaction block
26.44009256362915
current transaction is aborted, commands ignored until end of transaction block
26.4411039352417
current transaction is aborted, commands ignored until end of transaction block
26.44222903251648
current transaction is aborted, commands ignored until end of transaction block
26.44309949874878
current transaction is aborted, commands ignored until end of transaction block
26.44409680366516
current transaction is aborted, commands ignored until end of transaction block
26.44409680366516
current transaction is aborted, commands ignored until end of transaction block
26.445109844207764
current transaction is aborted, commands ignored until end of transaction block
26.446096897125244
current transaction is aborted, commands ignored until end of transaction block
26.446096897125244
current transaction is aborted, commands ignored until end of transaction block
26.447094440460205
current transaction is aborted, commands ignored until end of transaction block
26.447094440460205
current transaction is aborted, commands ignored until end of transaction block
26.448164701461792
current transaction is aborted, commands ignored until end of transaction block
26.449098587036133
current transaction is aborted, commands ignored until end of transaction block
26.449098587036133
current transaction is aborted, commands ignored until end of transaction block
26.45109510421753
current transaction is aborted, commands ignored until end of transaction block
26.45210027694702
current transaction is aborted, commands ignored until end of transaction block
26.454095602035522
current transaction is aborted, commands ignored until end of transaction block
26.45524787902832
current transaction is aborted, commands ignored until end of transaction block
26.456180334091187
current transaction is aborted, commands ignored until end of transaction block
26.45712447166443
current transaction is aborted, commands ignored until end of transaction block
26.458102464675903
current transaction is aborted, commands ignored until end of transaction block
26.459096670150757
current transaction is aborted, commands ignored until end of transaction block
26.459096670150757
current transaction is aborted, commands ignored until end of transaction block
26.460233211517334
current transaction is aborted, commands ignored until end of transaction block
26.460233211517334
current transaction is aborted, commands ignored until end of transaction block
26.46109914779663
current transaction is aborted, commands ignored until end of transaction block
26.462093830108643
current transaction is aborted, commands ignored until end of transaction block
26.463093757629395
current transaction is aborted, commands ignored until end of transaction block
26.463093757629395
current transaction is aborted, commands ignored until end of transaction block
26.464098691940308
current transaction is aborted, commands ignored until end of transaction block
26.46509623527527
current transaction is aborted, commands ignored until end of transaction block
26.46509623527527
current transaction is aborted, commands ignored until end of transaction block
26.467108964920044
current transaction is aborted, commands ignored until end of transaction block
26.468118906021118
current transaction is aborted, commands ignored until end of transaction block
26.469139099121094
current transaction is aborted, commands ignored until end of transaction block
26.4700984954834
current transaction is aborted, commands ignored until end of transaction block
26.4700984954834
current transaction is aborted, commands ignored until end of transaction block
26.471111536026
current transaction is aborted, commands ignored until end of transaction block
26.47231149673462
current transaction is aborted, commands ignored until end of transaction block
26.47231149673462
current transaction is aborted, commands ignored until end of transaction block
26.473096132278442
current transaction is aborted, commands ignored until end of transaction block
26.473096132278442
current transaction is aborted, commands ignored until end of transaction block
26.474095106124878
current transaction is aborted, commands ignored until end of transaction block
26.474095106124878
current transaction is aborted, commands ignored until end of transaction block
26.475093841552734
current transaction is aborted, commands ignored until end of transaction block
26.476092100143433
current transaction is aborted, commands ignored until end of transaction block
26.477097034454346
current transaction is aborted, commands ignored until end of transaction block
26.47810173034668
current transaction is aborted, commands ignored until end of transaction block
26.479125499725342
current transaction is aborted, commands ignored until end of transaction block
26.48009753227234
current transaction is aborted, commands ignored until end of transaction block
26.48110866546631
current transaction is aborted, commands ignored until end of transaction block
26.482104539871216
current transaction is aborted, commands ignored until end of transaction block
26.482104539871216
current transaction is aborted, commands ignored until end of transaction block
26.483094453811646
current transaction is aborted, commands ignored until end of transaction block
26.484093189239502
current transaction is aborted, commands ignored until end of transaction block
26.484093189239502
current transaction is aborted, commands ignored until end of transaction block
26.4851016998291
current transaction is aborted, commands ignored until end of transaction block
26.486100435256958
current transaction is aborted, commands ignored until end of transaction block
26.486100435256958
current transaction is aborted, commands ignored until end of transaction block
26.487285614013672
current transaction is aborted, commands ignored until end of transaction block
26.487285614013672
current transaction is aborted, commands ignored until end of transaction block
26.488097190856934
current transaction is aborted, commands ignored until end of transaction block
26.488097190856934
current transaction is aborted, commands ignored until end of transaction block
26.48941469192505
current transaction is aborted, commands ignored until end of transaction block
26.48941469192505
current transaction is aborted, commands ignored until end of transaction block
26.490095615386963
current transaction is aborted, commands ignored until end of transaction block
26.4910945892334
current transaction is aborted, commands ignored until end of transaction block
26.49321722984314
current transaction is aborted, commands ignored until end of transaction block
26.495120763778687
current transaction is aborted, commands ignored until end of transaction block
26.496095657348633
current transaction is aborted, commands ignored until end of transaction block
26.497310161590576
current transaction is aborted, commands ignored until end of transaction block
26.498316764831543
current transaction is aborted, commands ignored until end of transaction block
1637452739.1762984
time taken to load the data is 26.875293731689453 seconds
|
course/question_answering/03_exact_match.ipynb | ###Markdown
Exact Match Metric

The exact match (EM) metric does what you would expect it to: it returns a boolean value, yes or no, indicating whether our predicted text matches our true text. Let's take the following answers from the previous section:
###Code
answers = [
{"predicted": "France", "true": "France."},
{"predicted": "in the 10th and 11th centuries", "true": "10th and 11th centuries"},
{"predicted": "10th and 11th centuries", "true": "10th and 11th centuries"},
{"predicted": "Denmark, Iceland and Norway", "true": "Denmark, Iceland and Norway"},
{"predicted": "Rollo", "true": "Rollo,"},
]
###Output
_____no_output_____
###Markdown
To calculate the EM accuracy of our model using these five predictions, all we need to do is iterate through each prediction, and append a `1` where there is an exact match, or a `0` where there is not.
###Code
em = []
for answer in answers:
if answer["predicted"] == answer["true"]:
em.append(1)
else:
em.append(0)
# then total up all values in em and divide by number of values
sum(em) / len(em)
###Output
_____no_output_____
###Markdown
A 40% EM score doesn't look very good, given that we got incredibly close on every single answer. This is one of the limitations of the EM metric, but we can make it slightly more lenient. For example, our first answer returns *`'France'`* and *`'France.'`*, the only difference being the final punctuation, which is included in the *true* answer (and which is actually less correct than what our model predicted). We can clean each side of our text before comparison to remove these minor differences and return an exact match. For this, we can use regular expressions to remove any character which is not a space, letter, or number.
###Code
import re
em = []
for answer in answers:
pred = re.sub("[^0-9a-z ]", "", answer["predicted"].lower())
true = re.sub("[^0-9a-z ]", "", answer["true"].lower())
if pred == true:
em.append(1)
else:
em.append(0)
# then total up all values in em and divide by number of values
sum(em) / len(em)
###Output
_____no_output_____
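###Markdown
The cleaning and comparison logic above is easy to wrap into a single reusable helper. Below is a minimal sketch (the `exact_match` function name is ours, not part of the original notebook):
###Code
def exact_match(predicted, true):
    # normalise both answers: lowercase, then keep only letters, digits, and spaces
    norm = lambda text: re.sub("[^0-9a-z ]", "", text.lower())
    # return 1 for a match, 0 otherwise, so results can be summed directly
    return int(norm(predicted) == norm(true))

# same computation as above, expressed through the helper
sum(exact_match(a["predicted"], a["true"]) for a in answers) / len(answers)
###Output
_____no_output_____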
|
2D/Bandstructure-2D.ipynb | ###Markdown
Run the cell below by clicking on it and pressing *ctrl + enter*, and enjoy the game:
* Since the 2D problem is computationally more involved than the 1D one, recalculating the band structure is a bit slower, taking roughly 1-3 seconds.
* The image opens in a new window and can be controlled with the widgets appearing below.
* To move the 3D image, use the left mouse button; to zoom in and out, hold the left button and move the mouse up/down. The user icons in the figure work only for the 2D figure.
* The Brillouin zone is discretised into $30 \times 30$ different k-points here.
###Code
%run UserInterface_2D.ipynb
display(ToDisplay)
fig.show()
###Output
_____no_output_____ |
projects/alasdair/notebooks/contextual_bandits.ipynb | ###Markdown
Contextual Bandits (incomplete)
###Code
import numpy as np
import pandas as pd
import pickle
import seaborn as sns
from pandas import DataFrame, Index
from sklearn import metrics
from sklearn.linear_model import SGDClassifier
from sklearn.svm import SVC
from sklearn.kernel_approximation import RBFSampler, Nystroem
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import MiniBatchKMeans
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from scipy.spatial.distance import cosine
from IPython.core.display import HTML
from mclearn import *
%matplotlib inline
sns.set_palette("husl", 7)
HTML(open("styles/stylesheet.css", "r").read())
# read in the data
sdss = pd.io.parsers.read_csv("data/sdss_dr7_photometry.csv.gz", compression="gzip", index_col=["ra", "dec"])
# save the names of the 11 feature vectors and the target column
feature_names = ["psfMag_u", "psfMag_g", "psfMag_r", "psfMag_i", "psfMag_z",
"petroMag_u", "petroMag_g", "petroMag_r", "petroMag_i", "petroMag_z", "petroRad_r"]
target_name = "class"
X_train, X_test, y_train, y_test = train_test_split(np.array(sdss[feature_names]), np.array(sdss['class']), train_size=100000, test_size=30000)
# shuffle the data
X_train, y_train = shuffle(X_train, y_train)
X_test, y_test = shuffle(X_test, y_test)
###Output
_____no_output_____
###Markdown
Query by Committee
###Code
accuracies = []
predictions = [[] for i in range(10)]
forests = [None] * 11
# initially, pick 100 random points to query
X_train_cur, y_train_cur = X_train[:100], y_train[:100]
X_train_pool, y_train_pool = X_train[100:], y_train[100:]
# find the accuracy rate, given the current training example
forests[-1] = RandomForestClassifier(n_jobs=-1, class_weight='auto', random_state=5)
forests[-1].fit(X_train_cur, y_train_cur)
y_pred_test = forests[-1].predict(X_test)
confusion_test = metrics.confusion_matrix(y_test, y_pred_test)
accuracies.append(balanced_accuracy_expected(confusion_test))
# query by committee to pick the next point to sample
# use the model_selection KFold API imported above (n_splits, then .split)
kfold = KFold(n_splits=10, shuffle=True)
for i, (train_index, test_index) in enumerate(kfold.split(X_train_cur)):
forests[i] = RandomForestClassifier(n_jobs=-1, class_weight='auto', random_state=5)
forests[i].fit(X_train_cur[train_index], y_train_cur[train_index])
predictions[i] = forests[i].predict(X_train_pool)
###Output
_____no_output_____
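###Markdown
The notebook stops before actually scoring the committee's disagreement. A minimal sketch of one standard option, vote entropy over the committee's predictions, is below; the `query_idx` name is illustrative and not from the original.
###Code
# vote entropy: for each pool point, convert the committee's votes into
# per-class probabilities and compute the entropy; high entropy means high
# disagreement, so that point is the most informative one to query next
votes = np.stack(predictions)                 # shape: (n_members, n_pool)
entropy = np.zeros(votes.shape[1])
for c in np.unique(votes):
    p_c = np.mean(votes == c, axis=0)         # committee vote share for class c
    nonzero = p_c > 0
    entropy[nonzero] -= p_c[nonzero] * np.log(p_c[nonzero])
query_idx = np.argmax(entropy)                # index into X_train_pool
###Output
_____no_output_____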
###Markdown
Stochastic Gradient Descent
###Code
# normalise features to have mean 0 and variance 1
scaler = StandardScaler()
scaler.fit(X_train) # fit only on training data
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# approximates feature map of an RBF kernel by Monte Carlo approximation of its Fourier transform.
rbf_feature = RBFSampler(n_components=200, gamma=0.3, random_state=1)
X_train_rbf = rbf_feature.fit_transform(X_train)
X_test_rbf = rbf_feature.transform(X_test)
###Output
_____no_output_____
###Markdown
Random selection of data points at each iteration.
###Code
benchmark_sgd = SGDClassifier(loss="hinge", alpha=0.000001, penalty="l1", n_iter=10, n_jobs=-1,
class_weight='auto', fit_intercept=True, random_state=1)
benchmark_sgd.fit(X_train_rbf[:100], y_train[:100])
benchmark_y_pred = benchmark_sgd.predict(X_test_rbf)
benchmark_confusion = metrics.confusion_matrix(y_test, benchmark_y_pred)
benchmark_learning_curve = []
sample_sizes = np.concatenate((np.arange(100, 1000, 100), np.arange(1000, 10000, 1000), np.arange(10000, 100000, 10000),
np.arange(100000, 1000000, 100000), np.arange(1000000, len(X_train), 500000), [len(X_train)]))
benchmark_learning_curve.append(balanced_accuracy_expected(benchmark_confusion))
classes = np.unique(y_train)
for i, j in zip(sample_sizes[:-1], sample_sizes[1:]):
for _ in range(10):
X_train_partial, y_train_partial = shuffle(X_train_rbf[i:j], y_train[i:j])
benchmark_sgd.partial_fit(X_train_partial, y_train_partial, classes=classes)
benchmark_y_pred = benchmark_sgd.predict(X_test_rbf)
benchmark_confusion = metrics.confusion_matrix(y_test, benchmark_y_pred)
benchmark_learning_curve.append(balanced_accuracy_expected(benchmark_confusion))
# save output for later re-use
with open('results/sdss_active_learning/sgd_benchmark.pickle', 'wb') as f:
pickle.dump((benchmark_sgd, sample_sizes, benchmark_learning_curve), f, pickle.HIGHEST_PROTOCOL)
plot_learning_curve(sample_sizes, benchmark_learning_curve, "Benchmark Learning Curve (Random Selection)")
###Output
_____no_output_____
###Markdown
SVM with Random Sampling
###Code
svm_random = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto')
svm_random.fit(X_train[:100], y_train[:100])
svm_y_pred = svm_random.predict(X_test)
svm_confusion = metrics.confusion_matrix(y_test, svm_y_pred)
svm_learning_curve = []
sample_sizes = np.concatenate((np.arange(200, 1000, 100), np.arange(1000, 20000, 1000)))
svm_learning_curve.append(balanced_accuracy_expected(svm_confusion))
previous_h = svm_random.predict(X_train)
rewards = []
for i in sample_sizes:
svm_random.fit(X_train[:i], y_train[:i])
svm_y_pred = svm_random.predict(X_test)
svm_confusion = metrics.confusion_matrix(y_test, svm_y_pred)
svm_learning_curve.append(balanced_accuracy_expected(svm_confusion))
current_h = svm_random.predict(X_train)
    # reward: fraction of pool points whose predicted label changed between fits
    reward = np.mean(current_h != previous_h)
    previous_h = current_h
rewards.append(reward)
# save output for later re-use
with open('results/sdss_active_learning/sgd_svm_random.pickle', 'wb') as f:
pickle.dump((sample_sizes, svm_learning_curve, rewards), f, pickle.HIGHEST_PROTOCOL)
log_rewards = np.log(rewards)
beta, intercept = np.polyfit(sample_sizes, log_rewards, 1)
alpha = np.exp(intercept)
plt.plot(sample_sizes, rewards)
plt.plot(sample_sizes, alpha * np.exp(beta * sample_sizes))
plot_learning_curve(sample_sizes, svm_learning_curve, "SVM Learning Curve (Random Selection)")
###Output
_____no_output_____
###Markdown
Contextual Bandits We implement a contextual bandit algorithm for active learning, suggested by Bouneffouf et al (2014).
###Code
n_clusters = 100
kmeans = MiniBatchKMeans(n_clusters=n_clusters, init_size=100*n_clusters, random_state=2)
X_train_transformed = kmeans.fit_transform(X_train)
###Output
_____no_output_____
###Markdown
Each cluster has a context vector containing 4 pieces of information:* The mean distance between individual points in the cluster.* The variance of the distance between individual points in the cluster.* The number of points in the cluster.* The proportion of points that have been labelled in the cluster.
###Code
unlabelled_points = set(range(0, len(X_train)))
empty_clusters = set()
cluster_sizes = [len(np.flatnonzero(kmeans.labels_ == i)) for i in range(n_clusters)]
cluster_points = [list(np.flatnonzero(kmeans.labels_ == i)) for i in range(n_clusters)]
no_labelled = [0 for i in range(n_clusters)]
prop_labelled = [0 for i in range(n_clusters)]
d_means = []
d_var = []
for i in range(n_clusters):
distance, distance_squared, count = 0, 0, 0
for j, p1 in enumerate(cluster_points[i]):
for p2 in cluster_points[i][j+1:]:
d = np.fabs(X_train_transformed[p1][i] - X_train_transformed[p2][i])
distance += d
distance_squared += d**2
count += 1
if cluster_sizes[i] > 1:
d_means.append(distance / count)
d_var.append((distance_squared / count) - (distance / count)**2)
else:
d_means.append(0)
d_var.append(0)
context = np.array([list(x) for x in zip(d_means, d_var, cluster_sizes, prop_labelled)])
###Output
_____no_output_____
###Markdown
We'll use Thompson Sampling with linear payoffs and a Gaussian prior and likelihood. The algorithm is described in Agrawal et al (2013).
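Concretely, with context vector $b_i$ for cluster $i$, running estimate $\hat{\mu}$, and design matrix $B$ (all initialised in the next cell), each round samples $$\tilde{\mu} \sim \mathcal{N}\left(\hat{\mu},\; v^2 B^{-1}\right),$$ selects the cluster maximising $b_i^\top \tilde{\mu}$, and, after observing reward $r$, updates $$B \leftarrow B + b_i b_i^\top, \qquad f \leftarrow f + r\, b_i, \qquad \hat{\mu} = B^{-1} f.$$ These are exactly the update lines that appear at the end of the training loop below.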
###Code
context_size = 4
B = np.eye(context_size)
mu = np.array([0] * context_size)
f = np.array([0] * context_size)
v_squared = 0.25
###Output
_____no_output_____
###Markdown
Initially, we choose 100 random points to sample.
###Code
active_sgd = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto')
#active_sgd = SGDClassifier(loss="hinge", alpha=0.000001, penalty="l1", n_iter=10, n_jobs=-1,
# class_weight='auto', fit_intercept=True, random_state=1)
X_train_cur, y_train_cur = X_train[:100], y_train[:100]
active_sgd.fit(X_train_cur, y_train_cur)
# update context
for i in np.arange(0, 100):
this_cluster = kmeans.labels_[i]
cluster_points[this_cluster].remove(i)
unlabelled_points.remove(i)
if not cluster_points[this_cluster]:
empty_clusters.add(this_cluster)
no_labelled[this_cluster] += 1
context[this_cluster][3] = no_labelled[this_cluster] / cluster_sizes[this_cluster]
# initial prediction
active_y_pred = active_sgd.predict(X_test)
active_confusion = metrics.confusion_matrix(y_test, active_y_pred)
active_learning_curve = []
active_learning_curve.append(balanced_accuracy_expected(active_confusion))
classes = np.unique(y_train)
# compute the current hypothesis
previous_h = active_sgd.predict(X_train)
active_steps = [100]
no_choices = 1
rewards = []
for i in range(2000 // no_choices):
mu_sample = np.random.multivariate_normal(mu, v_squared * np.linalg.inv(B))
reward_sample = [np.dot(c, mu_sample) for c in context]
chosen_arm = np.argmax(reward_sample)
while chosen_arm in empty_clusters:
reward_sample[chosen_arm] = float('-inf')
chosen_arm = np.argmax(reward_sample)
# select a random point in the cluster
query = np.random.choice(cluster_points[chosen_arm], min(len(cluster_points[chosen_arm]), no_choices), replace=False)
# update context
for q in query:
cluster_points[chosen_arm].remove(q)
unlabelled_points.remove(q)
if not cluster_points[chosen_arm]:
empty_clusters.add(chosen_arm)
no_labelled[chosen_arm] += len(query)
context[chosen_arm][3] = no_labelled[chosen_arm] / cluster_sizes[chosen_arm]
active_steps.append(active_steps[-1] + len(query))
# run stochastic gradient descent
#active_sgd.partial_fit(X_train_rbf[query], y_train[query], classes=classes)
X_train_cur = np.vstack((X_train_cur, X_train[query]))
y_train_cur = np.concatenate((y_train_cur, y_train[query]))
active_sgd = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto')
active_sgd.fit(X_train_cur, y_train_cur)
active_y_pred = active_sgd.predict(X_test)
active_confusion = metrics.confusion_matrix(y_test, active_y_pred)
active_learning_curve.append(balanced_accuracy_expected(active_confusion))
# compute the reward from choosing such arm
current_h = active_sgd.predict(X_train)
    # fraction of pool predictions that changed between fits
    reward = np.mean(current_h != previous_h)
reward = reward / (alpha * np.exp(beta * len(y_train_cur)))
previous_h = current_h
rewards.append(reward)
# compute posterior distribution
B = B + np.outer(context[chosen_arm], context[chosen_arm])
f = f + reward * context[chosen_arm]
mu = np.dot(np.linalg.inv(B), f)
plot_learning_curve(active_steps, active_learning_curve, "SVM Learning Curve (Active Learning)")
###Output
_____no_output_____ |
Sessions/Session14/Day2/DeeplearningBlank.ipynb | ###Markdown
Classification with a Multi-layer Perceptron (MLP). Author: V. Ashley Villar. In this problem set, we will *not* be implementing neural networks from scratch. Yesterday, you built a *perceptron* in Python. Multi-layer perceptrons (MLPs) are, as discussed in the lecture, several layers of these perceptrons stacked. Here, we will learn how to use one of the most common modules for building neural networks: Pytorch
###Code
# this module contains our dataset
!pip install astronn
#this is pytorch, which we will use to build our nn
import torch
#Standards for plotting, math
import matplotlib.pyplot as plt
import numpy as np
#for our objective function
from sklearn.metrics import accuracy_score, confusion_matrix, ConfusionMatrixDisplay
###Output
_____no_output_____
###Markdown
A few notes on Pytorch syntax (Many thanks to Vanessa Bohm!!). Pytorch datatype summary: The model expects a single precision input. You can change the type of a tensor with tensor_name.type(), where tensor_name is the name of your tensor and type is the dtype. For typecasting into single precision floating points, use float(). A numpy array is typecasted with array_name.astype(type). For single precision, the type should be np.float32. Before we analyze tensors we often want to convert them to numpy arrays with tensor_name.numpy(). If pytorch has been tracking operations that resulted in the current tensor value, you need to detach the tensor from the graph (meaning you want to ignore things like its derivative) before you can transform it into a numpy array: tensor_name.detach(). Scalars can be detached with scalar.item(). Pytorch allows you to easily use your CPU or GPU; however, we are not using this feature. If your tensor is currently on the GPU, you can bring it onto the CPU with tensor_name.cpu() Problem 1: Understanding the Data. For this problem set, we will use the Galaxy10 dataset made available via the astroNN module. This dataset is made up of 17736 images of galaxies which have been labelled by hand. See this [link](https://astronn.readthedocs.io/en/latest/galaxy10.html) for more information. First we will visualize our data. **Problem 1a** Show one example of each class as an image.
###Code
from astroNN.datasets import load_galaxy10
from astroNN.datasets.galaxy10 import galaxy10cls_lookup
%matplotlib inline
#helpful functions:
#Load the images and labels as numbers
images, labels_original = load_galaxy10()
#convert numbers to a string
galaxy10cls_lookup(labels_original[0])
###Output
_____no_output_____
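###Markdown
Returning to the PyTorch syntax notes above, here is a minimal sketch of the dtype and detach conversions they describe (illustrative only, not part of the problem set):
###Code
# single-precision casting and numpy round-trips
t = torch.ones(3)                      # float32 tensor by default
arr = np.zeros(3).astype(np.float32)   # single-precision numpy array
t2 = torch.from_numpy(arr).float()     # tensor built from a numpy array
back = t2.detach().numpy()             # detach from the graph, then to numpy
scalar = t.sum().item()                # extract a Python scalar
###Output
_____no_output_____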
###Markdown
**Problem 2b** Make a histogram showing the fraction of each class. Keep only the top two classes (i.e., the classes with the most galaxies)
###Code
images_top_two = ...
labels_top_two = ...
###Output
_____no_output_____
###Markdown
This next block of code converts the data to a format which is more compatible with our neural network.
###Code
# This code converts from integer labels to 'one-hot encodings'. What does that term mean?
import torch.nn.functional as F
torch.set_default_dtype(torch.float)
labels_top_two_one_hot = F.one_hot(torch.tensor(labels_top_two - np.min(labels_top_two)).long(), num_classes=2)
images_top_two = torch.tensor(images_top_two).float()
labels_top_two_one_hot = labels_top_two_one_hot.float()
# we're going to flatten the images for our MLP
images_top_two_flat = ...
#Normalize the flux of the images here
images_top_two_flat_normed = ...
###Output
_____no_output_____
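###Markdown
As an aside, here is a minimal, standalone illustration of what "one-hot encoding" means (illustrative only, not part of the problem set): each integer label becomes a binary vector with a single 1 at the label's index.
###Code
# three hypothetical labels drawn from three classes
F.one_hot(torch.tensor([0, 2, 1]), num_classes=3)
# -> tensor([[1, 0, 0],
#            [0, 0, 1],
#            [0, 1, 0]])
###Output
_____no_output_____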
###Markdown
**Problem 2c** Split the data into a training and test set (66/33 split) using the train_test_split function from sklearn
###Code
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
The next cell outlines how one can make an MLP with PyTorch. **Problem 3a** Talk to a partner about how this code works, line by line. Add another hidden layer which is the same size as the first hidden layer. Choose an appropriate final nonlinear layer for this classification problem, and choose the appropriate number of outputs.
###Code
class MLP(torch.nn.Module):
# this defines the model
def __init__(self, input_size, hidden_size):
super(MLP, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.hiddenlayer = torch.nn.Linear(self.input_size, self.hidden_size)
self.outputlayer = torch.nn.Linear(self.hidden_size, HOW_MANY_OUTPUTS)
# some nonlinear options
self.sigmoid = torch.nn.Sigmoid()
self.softmax = torch.nn.Softmax()
self.relu = torch.nn.ReLU()
def forward(self, x):
layer1 = self.hiddenlayer(x)
activation = self.sigmoid(layer1)
layer2 = self.outputlayer(activation)
output = self.NONLINEAR(layer2)
return output
###Output
_____no_output_____
###Markdown
The next block of code shows how one can train the model (10 epochs here; increase this for your "final" run). Note that we use the *binary cross-entropy* as our objective function and *stochastic gradient descent* as our optimization method. **Problem 3b** Edit the code so that the function plots the training and test loss for each epoch.
###Code
# train the model
def train_model(training_data,training_labels, test_data,test_labels, model):
# define the optimization
criterion = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,momentum=0.9)
# Increase the number of epochs for your "final" run
for epoch in range(10):
# clear the gradient
optimizer.zero_grad()
# compute the model output
myoutput = model(training_data)
# calculate loss
loss = criterion(myoutput, training_labels)
# credit assignment
loss.backward()
# update model weights
optimizer.step()
# ADD PLOT
###Output
_____no_output_____
###Markdown
The next block trains the model, assuming a hidden layer size of 100 neurons. **Problem 3c** Change the learning rate `lr` to minimize the cross-entropy score
###Code
model = MLP(np.shape(images_train[0])[0],100)
train_model(images_train, labels_train, images_test, labels_test, model)
###Output
_____no_output_____
###Markdown
Write a function called `evaluate_model` which takes the image data, labels, and model as input and returns the accuracy as output. You can use the `accuracy_score` function.
###Code
# evaluate the model
def evaluate_model(data,labels, model):
return(acc)
# evaluate the model
acc = evaluate_model(images_test,labels_test, model)
print('Accuracy: %.3f' % acc)
###Output
_____no_output_____
###Markdown
**Problem 3d** Make a confusion matrix for the test set using `confusiion_matrix` and 'ConfusionMatrixDisplay`
###Code
###Output
_____no_output_____
###Markdown
**Challenge Problem** Add a third class to your classifier and begin accounting for uneven classes. There are several steps to this:1. Edit the neural network to output 3 classes2. Change the criterion to *Cross Entropy Loss* , such that the entropy of each class is weighted by the inverse fraction of each class size (e.g., if the galaxy class breakdowns are 1:2:3, the weights would be 6:3:2).
###Code
###Output
_____no_output_____ |
examples/6_p_scale_test_Dorogokupets2015_Au.ipynb | ###Markdown
For high dpi displays.
###Code
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
0. General note This example compares the pressure calculated from `pytheos` with the original publication for the gold scale by Dorogokupets (2015). 1. Global setup
###Code
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
###Output
_____no_output_____
###Markdown
3. Compare
###Code
eta = np.linspace(1., 0.65, 8)
print(eta)
dorogokupets2015_au = eos.gold.Dorogokupets2015()
help(dorogokupets2015_au)
dorogokupets2015_au.print_equations()
dorogokupets2015_au.print_parameters()
v0 = 67.84742110765599
dorogokupets2015_au.three_r
v = v0 * (eta)
temp = 2500.
p = dorogokupets2015_au.cal_p(v, temp * np.ones_like(v))
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f}".format(eta_i, p_i))
###Output
for T = 2500.0
1.000 15.67+/-0.00
0.950 24.96+/-0.00
0.900 38.28+/-0.00
0.850 57.17+/-0.01
0.800 83.82+/-0.01
0.750 121.43+/-0.01
0.700 174.70+/-0.01
0.650 250.67+/-0.02
###Markdown
A comparison table is not given in this publication; instead, we verify that inverting the calculated pressures recovers the input volume ratios.
###Code
v = dorogokupets2015_au.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print((v/v0))
###Output
[1. 0.95 0.9 0.85 0.8 0.75 0.7 0.65]
|
doc/source/tutorials/tutorial_binning_process_sklearn_pipeline.ipynb | ###Markdown
Tutorial: Binning process with sklearn Pipeline This example shows how to use a binning process as a transformation within a Scikit-learn Pipeline. A pipeline generally comprises the application of one or more transforms and a final estimator.
###Code
from optbinning import BinningProcess
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
###Output
_____no_output_____
###Markdown
To get us started, let’s load a well-known dataset from the UCI repository
###Code
data = load_boston()
variable_names = data.feature_names
X = data.data
y = data.target
variable_names
categorical_variables = ['CHAS']
###Output
_____no_output_____
###Markdown
Instantiate a ``BinningProcess`` object with the variable names and the list of numerical variables to be treated as categorical. Then create a pipeline object with two steps: a binning process transformer and a linear regression estimator.
###Code
binning_process = BinningProcess(variable_names,
categorical_variables=categorical_variables)
lr = Pipeline(steps=[('binning_process', binning_process),
('regressor', LinearRegression())])
###Output
_____no_output_____
###Markdown
Split the dataset into train and test sets, then fit the pipeline with the training data.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
lr.fit(X_train, y_train)
y_test_predict = lr.predict(X_test)
print("MSE: {:.3f}".format(mean_squared_error(y_test, y_test_predict)))
print("R2 score: {:.3f}".format(r2_score(y_test, y_test_predict)))
###Output
MSE: 17.626
R2 score: 0.760
###Markdown
In this case, the performance metrics show that the binning process transformation is effective in improving predictions. For comparison, we fit the same linear regression without the binning step:
###Code
lr2 = LinearRegression()
lr2.fit(X_train, y_train)
y_test_predict = lr2.predict(X_test)
print("MSE: {:.3f}".format(mean_squared_error(y_test, y_test_predict)))
print("R2 score: {:.3f}".format(r2_score(y_test, y_test_predict)))
###Output
MSE: 24.291
R2 score: 0.669
###Markdown
Binning process statistics The binning process of the pipeline can be retrieved to show information about the problem and timing statistics.
###Code
binning_process.information(print_level=1)
###Output
optbinning (Version 0.13.1)
Copyright (c) 2019-2022 Guillermo Navas-Palencia, Apache License 2.0
Statistics
Number of records 404
Number of variables 13
Target type continuous
Number of numerical 12
Number of categorical 1
Number of selected 13
Time 1.6495 sec
###Markdown
The ``summary`` method returns basic statistics for each binned variable.
###Code
binning_process.summary()
###Output
_____no_output_____
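###Markdown
To drill into a single variable after fitting, the binning process exposes each variable's optimal binning. A small sketch, assuming the ``get_binned_variable`` accessor available in this version of optbinning (the choice of ``LSTAT`` is just an example):
###Code
# retrieve the fitted optimal-binning object for one variable
optb = binning_process.get_binned_variable("LSTAT")
# its binning table lists the bins and per-bin statistics
optb.binning_table.build()
###Output
_____no_output_____ |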
docs/case-studies/TDEM/Kevitsa_VTEM.ipynb | ###Markdown
Kevitsa VTEM
###Code
from SimPEG import Mesh, EM, Utils, Maps
from matplotlib.colors import LogNorm
%pylab inline
import numpy as np
from scipy.constants import mu_0
from ipywidgets import interact, IntSlider
import cPickle as pickle
url = "https://storage.googleapis.com/simpeg/kevitsa_synthetic/"
files = ['dc_mesh.txt', 'dc_sigma.txt']
keys = ['mesh', 'sigma']
downloads = Utils.download([url + f for f in files], folder='./KevitsaDC', overwrite=True)
downloads = dict(zip(keys, downloads))
###Output
overwriting dc_mesh.txt
overwriting dc_sigma.txt
Downloading https://storage.googleapis.com/simpeg/kevitsa_synthetic/dc_mesh.txt
Downloading https://storage.googleapis.com/simpeg/kevitsa_synthetic/dc_sigma.txt
Download completed!
###Markdown
Model: This model is a synthetic based on geologic surfaces interpreted from seismic data over the Kevitsa deposit in Finland. A synthetic 3D conductivity model is generated, and the figure below shows a conductivity section across the mineralized zone of interest. The near-surface conductor on the left-hand side corresponds to a sedimentary unit, and the embedded conductor on the right-hand side indicates the conductive mineralized zone. Our interest here is in the conductive mineralized zone at depth.
###Code
mesh3D = Mesh.TensorMesh.readUBC(downloads["mesh"])
sigmadc = mesh3D.readModelUBC(downloads["sigma"])
actind = ~np.isnan(sigmadc)
figsize(8, 4)
indy = 6
temp = 1./sigmadc.copy()
temp[~actind] = np.nan
out = mesh3D.plotSlice(temp, normal="Y", ind=indy, pcolorOpts={"norm": LogNorm(), "cmap":"jet_r"}, clim=(1e0, 1e3))
plt.ylim(-800, 250)
plt.xlim(5000, 11000)
plt.gca().set_aspect(2.)
plt.title(("y= %d m")%(mesh3D.vectorCCy[indy]))
cb = plt.colorbar(out[0], orientation="horizontal")
cb.set_label("Resistivity (Ohm-m)")
###Output
/anaconda/lib/python2.7/site-packages/matplotlib/colors.py:927: RuntimeWarning: invalid value encountered in less_equal
mask |= resdat <= 0
###Markdown
Question: Can we see the mineralized zone at depth (~200 m) using airborne EM? To answer this question, we simplify our model as a) a conductive layer and b) a conductive cylinder embedded at depth. Mesh: We use a symmetric cylindrical mesh to simulate airborne time-domain EM with this simplified model. The code below shows how to design the mesh.
###Code
sig_halfspace = 2e-3
sig_target = 0.1
sig_air = 1e-8
times = np.logspace(-4, -2, 21)
def diffusion_distance(sigma, time):
return 1.28*np.sqrt(time/(sigma * mu_0))
print(
'min diffusion distance: {:.2e} m'.format(diffusion_distance(sig_halfspace, times.min()))
)
print(
'max diffusion distance: {:.2e} m'.format(diffusion_distance(sig_halfspace, times.max()))
)
# x-direction
csx = 20 # core mesh cell width in the x-direction
ncx = 20
npadx = 15 # number of x padding cells
# z-direction
csz = 20 # core mesh cell width in the z-direction
ncz = 40
npadz = 15 # number of z padding cells
# padding factor (expand cells to infinity)
pf = 1.3
# cell spacings in the x and z directions
hx = Utils.meshTensor([(csx, ncx), (csx, npadx, pf)])
hz = Utils.meshTensor([(csz, npadz, -pf), (csz, ncz), (csz, npadz, pf)])
# define a SimPEG mesh
mesh = Mesh.CylMesh([hx, 1, hz], x0 ="00C")
# X and Z limits we want to plot to. Try
xlim = np.r_[0., mesh.vectorCCx.max()]
zlim = np.r_[mesh.vectorCCz.max(), mesh.vectorCCz.min()]
fig, ax = plt.subplots(1,1)
mesh.plotGrid(ax=ax)
ax.set_title('Simulation Mesh')
ax.set_xlim(xlim)
ax.set_ylim(zlim)
print(
'The maximum diffusion distance (in background) is: {:.2e} m. '
'Does the mesh go sufficiently past that?'.format(
diffusion_distance(sig_halfspace, times.max())
)
)
ax.set_aspect("equal")
###Output
The maximum diffusion distance (in background) is: 2.55e+03 m. Does the mesh go sufficiently past that?
###Markdown
Next, we put the model on the mesh
###Code
# create a vector that has one entry for every cell center
sigma = sig_air*np.ones(mesh.nC) # start by defining the conductivity of the air everwhere
sigma[mesh.gridCC[:,2] < 0.] = sig_halfspace # assign halfspace cells below the earth
sigma_background = sigma.copy()
sigma_layer = sigma.copy()
radius = 150.
# indices of the sphere (where (x-x0)**2 + (z-z0)**2 <= R**2)
layer_ind = np.logical_and(mesh.gridCC[:,2]>-300, mesh.gridCC[:,2]<-200)
blk_ind = (mesh.gridCC[:,0] < radius) & layer_ind
sigma[blk_ind] = sig_target # assign the conductivity of the sphere
sigma_layer[layer_ind] = sig_target # assign the conductivity of the sphere
plt.set_cmap(plt.get_cmap('jet_r'))
# Plot a cross section of the conductivity model
fig, ax = plt.subplots(1,1)
out = mesh.plotImage(np.log10(1./sigma_layer), ax=ax, mirror=True, clim=(0, 3), grid=False)
cb = plt.colorbar(out[0], ticks=np.linspace(0,3,4), format="10$^{%.1f}$")
# plot formatting and titles
cb.set_label('Resistivity (Ohm-m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-120., 120.])
ax.set_ylim([-500., 0.])
ax.set_title('Layer')
# Plot a cross section of the conductivity model
fig, ax = plt.subplots(1,1)
out = mesh.plotImage(np.log10(1./sigma), ax=ax, mirror=True, clim=(0, 3), grid=False)
# plot formatting and titles
cb = plt.colorbar(out[0], ticks=np.linspace(0,3,4), format="10$^{%.1f}$")
# plot formatting and titles
cb.set_label('Resistivity (Ohm-m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-120., 120.])
ax.set_ylim([-500., 0.])
ax.set_title('Cylinder')
###Output
_____no_output_____
###Markdown
Forward Simulation Define source and receiver loop location, and put parameters of the waveform. Here we use a current loop source having 13m-radius and measure db/dt in vertical direction inside of the loop. Both loops are located 41m above the surface.
###Code
rx_loc = np.array([[0., 0., 41.]])
src_loc = np.array([[0., 0., 41.]])
offTime = 0.007307
peakTime = 0.006
a = 3.
dbdt_z = EM.TDEM.Rx.Point_dbdt(locs=rx_loc, times=times+offTime, orientation='z') # vertical db_dt
rxList = [dbdt_z] # list of receivers
srcList = [
EM.TDEM.Src.CircularLoop(
rxList, loc=src_loc, radius=13., orientation='z', waveform=EM.TDEM.Src.VTEMWaveform(offTime=offTime, peakTime=peakTime, a=3.)
)
]
# solve the problem at these times
timeSteps = [(peakTime/5, 5), ((offTime-peakTime)/5, 5), (1e-5, 10), (5e-5, 10), (1e-4, 10), (5e-4, 19)]
prob = EM.TDEM.Problem3D_b(mesh, timeSteps = timeSteps, sigmaMap=Maps.IdentityMap(mesh))
survey = EM.TDEM.Survey(srcList)
prob.pair(survey)
src = srcList[0]
rx = src.rxList[0]
wave = []
for time in prob.times:
wave.append(src.waveform.eval(time))
wave = np.hstack(wave)
plt.plot(prob.times, wave, 'k.-')
plt.plot(rx.times, np.zeros_like(rx.times), 'r.')
plt.ylim(-0.2, 1.2)
plt.grid(True)
plt.title('Current Waveform')
plt.xlabel('time (s)')
###Output
_____no_output_____
###Markdown
Compute Predicted Data: We compute predicted data for three different models: a) background (halfspace), b) layer, and c) cylinder.
###Code
d_background = survey.dpred(sigma_background)
d_layer = survey.dpred(sigma_layer)
d = survey.dpred(sigma)
area = 13**2*np.pi
figsize(6, 3)
plt.loglog((rx.times-offTime)*1e6, -d_layer*1e12/area, 'k', lw=2)
plt.loglog((rx.times-offTime)*1e6, -d*1e12/area , 'b', lw=2)
plt.loglog((rx.times-offTime)*1e6, -d_background*1e12/area, 'k--', lw=1)
plt.xlabel("Time (micro-s)")
plt.ylabel("Voltage (pV/A-m$^4$)")
plt.legend(("Layer", "Cylinder","Half-space"), loc=1, fontsize = 10)
plt.ylim(1e-4, 1e1)
plt.grid(True)
###Output
_____no_output_____
###Markdown
Question: What are your thoughts on the above plot? Can we see the conductive mineralized zone? Signals from the layer and cylinder models differ significantly; can you explain why? The underlying physics of the measured voltage is governed by Faraday's law: $$ \nabla \times \vec{e} = -\frac{\partial \vec{b}}{\partial t}$$ By showing how the electric field propagates in the subsurface, we illustrate why the layer and cylinder models differ so much. Electric field in the layer model
###Code
f_layer = prob.fields(sigma_layer)
plt.set_cmap(plt.get_cmap('viridis'))
def vizfield_layer(itime):
fig = plt.figure(figsize = (7*0.8,5*0.8))
ax = plt.subplot(111)
cb = plt.colorbar(mesh.plotImage(mesh.aveE2CC*f_layer[src, 'e', itime], ax=ax, mirror=True)[0])
# plot formatting and titles
cb.set_label('e$_{y}$ (V/m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-300., 300.])
ax.set_ylim([-500., 0.])
ax.set_title(('|e$_{y}$| at %d micro-s')%(prob.times[itime]*1e6))
plt.show()
interact(vizfield_layer, itime=IntSlider(min=0, max=len(prob.times)-1, step=1, value=11))
###Output
_____no_output_____
###Markdown
Electric Field in the Cylinder model
###Code
f = prob.fields(sigma)
def vizfield_cylinder(itime):
fig = plt.figure(figsize = (7*0.8,5*0.8))
ax = plt.subplot(111)
cb = plt.colorbar(mesh.plotImage(mesh.aveE2CC*f[src, 'e', itime], ax=ax, mirror=True)[0])
# plot formatting and titles
cb.set_label('e$_{y}$ (V/m)', fontsize=13)
# ax.axis('equal')
ax.set_xlim([-300., 300.])
ax.set_ylim([-500., 0.])
ax.set_title(('|e$_{y}$| at %d micro-s')%(prob.times[itime]*1e6))
plt.tight_layout()
plt.show()
interact(vizfield_cylinder, itime=IntSlider(min=0, max=len(prob.times)-1, step=1, value=11))
###Output
_____no_output_____ |
0.13/_downloads/plot_decoding_csp_space.ipynb | ###Markdown
==================================================================== Decoding in sensor space data using the Common Spatial Pattern (CSP) ==================================================================== Decoding applied to MEG data in sensor space decomposed using CSP. Here the classifier is applied to features extracted on CSP filtered signals. See http://en.wikipedia.org/wiki/Common_spatial_pattern and [1]. [1] Zoltan J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440--447, December 1991.
###Code
# Authors: Alexandre Gramfort <[email protected]>
# Romain Trachel <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
###Output
_____no_output_____
###Markdown
Set parameters and read data
###Code
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(2, None, method='iir') # replace baselining with high-pass
events = mne.read_events(event_fname)
raw.info['bads'] = ['MEG 2443'] # set bad channels
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=False,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=None, preload=True)
labels = epochs.events[:, -1]
evoked = epochs.average()
###Output
_____no_output_____
###Markdown
Decoding in sensor space using a linear SVM
###Code
from sklearn.svm import SVC # noqa
from sklearn.cross_validation import ShuffleSplit # noqa
from mne.decoding import CSP # noqa
n_components = 3 # pick some components
svc = SVC(C=1, kernel='linear')
csp = CSP(n_components=n_components)
# Define a monte-carlo cross-validation generator (reduce variance):
cv = ShuffleSplit(len(labels), 10, test_size=0.2, random_state=42)
scores = []
epochs_data = epochs.get_data()
for train_idx, test_idx in cv:
y_train, y_test = labels[train_idx], labels[test_idx]
X_train = csp.fit_transform(epochs_data[train_idx], y_train)
X_test = csp.transform(epochs_data[test_idx])
# fit classifier
svc.fit(X_train, y_train)
scores.append(svc.score(X_test, y_test))
# Printing the results
class_balance = np.mean(labels == labels[0])
class_balance = max(class_balance, 1. - class_balance)
print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores),
class_balance))
# Or use much more convenient scikit-learn cross_val_score function using
# a Pipeline
from sklearn.pipeline import Pipeline # noqa
from sklearn.cross_validation import cross_val_score # noqa
cv = ShuffleSplit(len(labels), 10, test_size=0.2, random_state=42)
clf = Pipeline([('CSP', csp), ('SVC', svc)])
scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1)
print(scores.mean()) # should match results above
# And using regularized CSP with the Ledoit-Wolf estimator
csp = CSP(n_components=n_components, reg='ledoit_wolf')
clf = Pipeline([('CSP', csp), ('SVC', svc)])
scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1)
print(scores.mean()) # should get better results than above
# plot CSP patterns estimated on full data for visualization
csp.fit_transform(epochs_data, labels)
data = csp.patterns_
fig, axes = plt.subplots(1, 4)
for idx in range(4):
mne.viz.plot_topomap(data[idx], evoked.info, axes=axes[idx], show=False)
fig.suptitle('CSP patterns')
fig.tight_layout()
fig.show()
###Output
_____no_output_____ |
Titanic_Dataset.ipynb | ###Markdown
**Titanic Dataset**
**Using PANDAS and MATPLOTLIB to evaluate and figure out the specifics of the Titanic dataset and draw some obvious conclusions derived from the result of our code**.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("https://gist.githubusercontent.com/michhar/2dfd2de0d4f8727f873422c5d959fff5/raw/fa71405126017e6a37bea592440b4bee94bf7b9e/titanic.csv")
###Output
_____no_output_____
###Markdown
**Basic overview of the data**
---
First we take a look at the head, i.e. the first five rows of data. This gives us a basic overview and tells us the different columns that are present.
Slicing with [x:y] gives us rows x to y-1.
---
Information not obvious:
* survival - Survival (0 = No; 1 = Yes)
* class - Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
* Name - Name
* sex - Sex
* age - Age
* sibsp - Number of Siblings/Spouses Aboard
* parch - Number of Parents/Children Aboard
* ticket - Ticket Number
* fare - Passenger Fare
* cabin - Cabin
* embarked - Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
###Code
df.head()
#can also be done by
#df[0:5]
###Output
_____no_output_____
###Markdown
Beginning our analysis
Let's start off small and get a basic statistical overview. (some values here may or may not make sense, but let's see anyway.)
###Code
df.describe()
###Output
_____no_output_____
###Markdown
We see that there are a total of 891 passengers taken into account in this database. The means of the PassengerId and Pclass columns make no sense, so we shall ignore those.
* We see that the mean age is about 29.7 years, which means most of the people on board were young adults.
* The mean fare was \$32.20, which suggests that there were comparatively many people with lower-class tickets. The maximum fare was \$512.33.
Rest of the data will be made clear as we move along.
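As a quick check on the fare observation (a supplementary cell; the value assumes the standard Titanic training set):
###Code
# The median fare sits far below the mean, confirming a right-skewed fare
# distribution: many cheap tickets and a few very expensive ones
df['Fare'].median()
###Output
_____no_output_____
###Markdown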
Now let's take a look at the number of people who survived the sinking of Titanic.
###Code
df.groupby('Survived').count()[['PassengerId']]
###Output
_____no_output_____
###Markdown
Here we see that the number of people who did not survive is 549 and the number of people who survived is 342.
---
Now let's analyse the following:
Amongst the survivors,
* how many had family on board and,
* which class on the boat they resided in.
---
First, let's take a look at how many people there were in the various classes of the boat.
###Code
df.groupby('Pclass').count()[['PassengerId']]
###Output
_____no_output_____
###Markdown
We notice that most passengers resided in third class, as suspected above from the mean fare
---
Now let's take a look at the survival numbers based on class
###Code
df.groupby(['Survived', 'Pclass']).count()[['PassengerId']]
###Output
_____no_output_____
###Markdown
Here we see that, amongst the people that **did not** survive, 80 were in first class while 372 were in third class.
We notice that of the 491 people in third class, 372 died and only 119 survived.
That was not the case in first class, however, where 80 out of 216 died and 136 survived. Let's take a look at the number of male and female passengers.
###Code
df.groupby('Sex').count()[['PassengerId']]
###Output
_____no_output_____
###Markdown
Now let's look at the male to female ratio of the survivors and dead people.
###Code
df.groupby(['Survived', 'Sex']).count()[['PassengerId']]
###Output
_____no_output_____
###Markdown
As the women and children were evacuated first, the number of male deaths was staggeringly high compared to female deaths. As a result, the number of male survivors was also much lower than that of female survivors.
* 468/577 dead males
* 81/314 dead females. Let's analyse how many people were under the age of 18.
###Code
df[df.Age <= 18].count()[['PassengerId']]
###Output
_____no_output_____
###Markdown
139 passengers were aged 18 or under, i.e. children.
---
How many of these children survived the sinking?
###Code
df[df.Age <= 18].groupby('Survived').count()[['PassengerId']]
###Output
_____no_output_____
###Markdown
Sadly, about half the children did not make it out of the boat.
---
Youngest person to die?
###Code
#df.sort_values(by = 'Age',ascending= True)[['Survived','Age']]
dead = df[df.Survived == 0].sort_values(by = 'Age')
dead = dead[dead['Age'].notnull()] ## Here we choose the rows which don't have Age as NULL to check the eldest recorded person.
dead
# dead = df[df.Survived == 0]
# dead[dead.Age.min()]
###Output
_____no_output_____
###Markdown
Here we see that, according to this dataset, the youngest person to die on the ship was 1 year old, and the eldest person to die that we have a record of was 74 years old.
---
---
We shall now see some graphical representations of the above data using matplotlib.
Matplotlib and Seaborn
###Code
import seaborn as sns
###Output
_____no_output_____
###Markdown
Plotting the different features of the dataset against their frequency to get a visual idea.
###Code
sns.distplot(df["Pclass"], kde = False, color = "red", axlabel = "Number of people in each class")
plt.show()
sns.distplot(df.Survived, vertical = True, kde= False, color="green", label= " Number of survivors and deaths ")
plt.yticks([0,1])
plt.show()
sns.distplot(df.Age, vertical = False, kde= False, color="cyan")
plt.show()
###Output
/usr/local/lib/python3.6/dist-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
We get a visual representation of the data which coincides with the conclusions we made previously from analysing the dataset.
---
To get a better idea, we use countplots.
###Code
sns.countplot( x='Survived', data= df)
plt.show()
sns.countplot( x='Pclass', data= df)
plt.show()
sns.countplot( x='Age', data= df, hue= 'Sex')
plt.xticks([1, 20, 40, 60, 80, 100])
plt.show()
###Output
_____no_output_____
###Markdown
Plotting age against fare with the hue set to sex, we see that most of the cheaper seats were filled with many more males than females, while as the price increases, the proportion of females grows.
###Code
sns.lmplot( x = 'Age', y = 'Fare', data = df, fit_reg= False, hue='Sex')
###Output
_____no_output_____
###Markdown
Plotting age against fare with the hue set to survival, we see that most passengers in the cheaper seats ended up dead, while as the price increases, survivors outnumber deaths.
###Code
sns.lmplot( x = 'Age', y = 'Fare', data = df, fit_reg= False, hue='Survived')
###Output
_____no_output_____
###Markdown
Let's look at the boxplots of a few of the features/data.
###Code
sns.boxplot(data = df.loc[:,["Age"]])
sns.boxplot(data = df.loc[:,["Pclass", "Parch", "SibSp"]])
plt.show()
###Output
_____no_output_____
###Markdown
###Code
sns.boxplot( x = df.Survived, y = df.Pclass )
g = sns.FacetGrid(df , row= "Pclass")
g = g.map(plt.hist, "Survived")
plt.show
sns.boxplot( x = df.Fare, y = df.Survived )
g = sns.FacetGrid(df , row= "Survived")
g = g.map(plt.hist, "Fare")
plt.show
from sklearn.model_selection import train_test_split
import tensorflow as tf
from IPython.display import clear_output
df.head()
dataset = df
temp = df
dataset.head()
dataset = dataset[dataset["Age"].notnull() & dataset["Cabin"].notnull() & dataset["Embarked"].notnull()]
dataset.head()
#dataset = dataset.drop(['Name','Ticket','PassengerId'], axis=1)
dataset.head()
y = dataset.Survived
x = dataset.drop("Survived", axis = 1)
x
###Output
_____no_output_____
###Markdown
###Code
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2)
x_train.head()
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
CATEGORICAL_COLUMNS = ["Sex", "SibSp", "Parch", "Pclass", "Cabin", "Embarked"]
NUMERICAL_COLUMNS = ["Age", "Fare"]
feature_columns= []
for feature_name in CATEGORICAL_COLUMNS:
vocabulary = x_train[feature_name].unique()
feature_columns.append( tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocabulary))
for feature_name in NUMERICAL_COLUMNS:
feature_columns.append(tf.feature_column.numeric_column(feature_name, dtype = float))
print(feature_columns)
def make_input_fn(data_df, label_df, num_epochs=700, shuffle=True, batch_size=8):
def input_function(): # inner function, this will be returned
ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df)) # create tf.data.Dataset object with data and its label
if shuffle:
ds = ds.shuffle(1000) # randomize order of data
    ds = ds.batch(batch_size).repeat(num_epochs) # split dataset into batches of size batch_size and repeat for the given number of epochs
return ds # return a batch of the dataset
return input_function # return a function object for use
train_input_fn = make_input_fn(x_train, y_train) # here we will call the input_function that was returned to us to get a dataset object we can feed to the model
eval_input_fn = make_input_fn(x_test, y_test, num_epochs=1, shuffle=False)
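# Train a linear classifier over the feature columns defined above; FTRL with a
# small learning rate plus L2 regularization is a common choice for sparse
# one-hot categorical features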
linear_est = tf.estimator.LinearClassifier(feature_columns= feature_columns, optimizer=tf.keras.optimizers.Ftrl(
learning_rate=0.0025,
l2_regularization_strength=0.1
))
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(result["accuracy"])
result = list(linear_est.predict(eval_input_fn))
print(x_train.iloc[12])
print(y_train.iloc[12])
print(result[12]["probabilities"][1])
###Output
_____no_output_____
###Markdown
###Code
# Importing Libraries
import pandas as pd
import numpy as np
import seaborn as sns
import math
%matplotlib inline
# Reading Dataset
titanic_dataset=pd.read_csv("https://raw.githubusercontent.com/manishanker/Statistics_ML_26Aug/master/titanic_data.csv")
titanic_dataset.head(3)
titanic_dataset.shape
###Output
_____no_output_____
###Markdown
- Analyse the dataset
- Get the total number of passengers
###Code
sns.countplot('Survived',data=titanic_dataset)
sns.countplot('Survived',hue='Sex',data=titanic_dataset)
titanic_dataset.columns
sns.countplot('Survived',hue='Pclass',data=titanic_dataset)
titanic_dataset.Age.unique()
# titanic_dataset.shape
from matplotlib import pyplot as plt
plt.xticks(range(0,80,5))
plt.yticks(range(0,300,10))
titanic_dataset["Age"].plot.hist(bins=[0,10,20,30,40,50,60,70,80],figsize=(10,10),edgecolor="gray",facecolor='yellow',alpha=0.6)
###Output
_____no_output_____
###Markdown
How many older people are on the Titanic?
###Code
len(titanic_dataset[titanic_dataset['Age']>60])
###Output
_____no_output_____
###Markdown
How many people are in the age group of 25 yrs to 45 yrs ?
###Code
titanic_dataset[(titanic_dataset["Age"]>=25) & (titanic_dataset["Age"]<=45)].shape[0]
sns.countplot('SibSp',data=titanic_dataset)
sns.countplot("Parch",data=titanic_dataset)
###Output
_____no_output_____
###Markdown
Data Wrangling
- Is cleaning required?
- Imputation
What percentage of null values are present in each column?
###Code
(titanic_dataset.isna().sum()/titanic_dataset.shape[0])*100
###Output
_____no_output_____
###Markdown
Remove the Cabin column as it has a lot of missing values (~77%)
###Code
titanic_dataset.drop('Cabin',axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
Fillna in Age column
###Code
titanic_dataset["Age"]=titanic_dataset["Age"].fillna(titanic_dataset["Age"].median())
titanic_dataset['Embarked'].value_counts()
titanic_dataset["Embarked"].mode()[0]
titanic_dataset["Embarked"]=titanic_dataset["Embarked"].fillna(titanic_dataset["Embarked"].mode()[0])
titanic_dataset.isna().sum()
## Cleaning is done
###Output
_____no_output_____
###Markdown
EDA Box Plot
###Code
sns.boxplot(titanic_dataset["Age"],data=titanic_dataset)
sns.boxplot(titanic_dataset["Fare"],data=titanic_dataset)
###Output
_____no_output_____
###Markdown
Pending
- Removing outliers
- Removing duplicates
###Code
titanic_dataset.duplicated().sum()
sex=pd.get_dummies(titanic_dataset["Sex"])
sex.head()
embark=pd.get_dummies(titanic_dataset["Embarked"])
embark.head()
pcl=pd.get_dummies(titanic_dataset["Pclass"])
pcl.head()
titanic_dataset=pd.concat([titanic_dataset,sex,embark,pcl],axis=1)
titanic_dataset.head()
titanic_dataset.drop(["Ticket","PassengerId","Name","Sex","Pclass","Embarked"],axis=1,inplace=True)
titanic_dataset.head()
titanic_dataset.to_csv("clean_titanic.csv")
###Output
_____no_output_____
###Markdown
Test train split
###Code
X=titanic_dataset.drop(["Survived"],axis=1)
y=titanic_dataset["Survived"]
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=444)
from sklearn.linear_model import LogisticRegression
model=LogisticRegression()
X_train.head()
model.fit(X_train,y_train)
predictions=model.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
y_test.value_counts()
from collections import Counter
Counter(predictions)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,predictions)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,predictions)
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
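# Caution: calling fit_transform on all of X before splitting leaks test-set
# information into the scaler; the cells below redo this properly by fitting
# the scaler on X_train only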
X_std=scaler.fit_transform(X)
X_train,X_test,y_train,y_test=train_test_split(X_std,y,test_size=0.3,random_state=444)
model.fit(X_train,y_train)
predictions=model.predict(X_test)
accuracy_score(y_test,predictions)
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=444)
X_train_std=scaler.fit_transform(X_train)
model.fit(X_train_std,y_train)
X_test_std=scaler.transform(X_test)
predictions=model.predict(X_test)
accuracy_score(y_test,predictions)
predictions=model.predict(X_test_std)
accuracy_score(y_test,predictions)
import sklearn
sklearn.__version__
X.SibSp.unique()
X=X.drop("SibSp",axis=1)
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=444)
X_train_std=scaler.fit_transform(X_train)
model.fit(X_train_std,y_train)
X_test_std=scaler.transform(X_test)
predictions=model.predict(X_test)
accuracy_score(y_test,predictions)
predictions=model.predict(X_test_std)
accuracy_score(y_test,predictions)
###Output
_____no_output_____
###Markdown
Titanic dataset analysis
Author: Romullo Ferreira
Data Analysis and Prediction of Survivors on the Titanic Dataset
Overview
This study is an exercise to show how to use the foundations of Data Science to import data, study, visualize, and present the results. The aim of this project is to analyse the Kaggle Titanic dataset, which includes the following steps:
1. Business Questions (Business Understanding)
2. Data wrangling
 - 2.1. Gather
 - 2.2. Assess
 - 2.3. Prepare Data (Clean)
3. Data exploration and visualization (Data Modeling)
4. Conclusions
Useful facts about the data:
- Titanic data - contains demographic data and information for 891 of the 2,224 passengers and crew aboard the Titanic.
- You can see a description of this data set on the Kaggle website, where the data was taken from.
1. Business Questions (Business Understanding)
We have a lot of things to ask. The dataset has a lot of information, but to keep the analysis from getting too long I decided to start with some questions that I thought were important. After taking a look at my analysis, feel free to ask other questions and analyse further.
- a. What is the mean age of passengers on board?
- b. How are the passengers distributed across the ship's classes?
- c. What share of each sex survived?
- d. Which class of passengers survived?
- e. What is the mean age of the passengers who survived?
- f. What were the factors that made people survive?
2. Data wrangling
2.1. Gather
Firstly, let's import the necessary libraries for this project
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Import CSV file
###Code
df_titanic = pd.read_csv('titanic-data-6.csv', sep=',')
###Output
_____no_output_____
###Markdown
2.2. Assess Maybe it is difficult to analyze a large data set, but if we analyze smaller samples we can answer some questions at the beginning. Let's take a first look at the data using the head() function. That returns the first 5 lines of the dataframe.
###Code
df_titanic.head()
###Output
_____no_output_____
###Markdown
We can also return the last 5 lines of the dataframe using the tail() function.
###Code
df_titanic.tail()
###Output
_____no_output_____
###Markdown
Descriptive statistics are useful for each column of data. The describe() function gives us the mean of the numeric columns and other useful information like the max and min.
###Code
df_titanic.describe()
###Output
_____no_output_____
###Markdown
Returning the dimensions of the dataframe
###Code
df_titanic.shape
###Output
_____no_output_____
###Markdown
Just above we can see that the dataframe has 891 rows and 12 columns. Using the info() function we will display a concise summary of the dataframe, including the number of non-null values in each column (to see whether they have missing values) and the data type of each column
###Code
df_titanic.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
###Markdown
As we can see, the Age, Cabin and Embarked columns have missing values. What are the data types of the columns? We can also see them with the dtypes attribute.
###Code
df_titanic.dtypes
###Output
_____no_output_____
###Markdown
I believe that we have no problem with the data types. The Age variable is a float because there are children on board under the age of 1. Let's leave it at that. Let's see the number of unique values for each column.
###Code
df_titanic.nunique()
###Output
_____no_output_____
###Markdown
Are there duplicate rows? Using the sum() function together with duplicated() we can check this.
###Code
sum(df_titanic.duplicated())
###Output
_____no_output_____
###Markdown
It looks like we don't have any duplicate rows, which is very good. 2.3. Prepare Data (Clean) Missing values (fixing NaN values in the Age column) - There are missing values in the Age column. Define - We use the fillna method to correct the missing data in the Age column. Code
###Code
mean = df_titanic['Age'].mean()
df_titanic['Age'].fillna(mean, inplace=True)
###Output
_____no_output_____
###Markdown
Test
###Code
df_titanic.count()
###Output
_____no_output_____
###Markdown
Nice! We solved the problem in the Age column. Delete columns - The Cabin column also has missing values, but I chose to delete it because I will not use it in this project. Define - We will delete the Cabin column. Code
###Code
drop_column = ['Cabin']
df_titanic.drop(drop_column, axis=1, inplace = True)
###Output
_____no_output_____
###Markdown
Test
###Code
df_titanic.head()
###Output
_____no_output_____
###Markdown
Great! Cabin column successfully deleted! 3. Data exploration and visualization (Data Modeling) a. -What is the mean age of passengers on board?
###Code
def column_median(data_column):
"""
Function column_median.
Args:
data_column: Parameter that takes the column that we want to take the median.
Returns:
The median of the data in the chosen column.
"""
column = df_titanic[data_column].median()
return column
print("\nMedian age column")
print(column_median('Age'))
###Output
Median age column
29.69911764705882
###Markdown
We can see that the median age is about 29.7 years, i.e. mostly young adults (this value matches the mean because the missing ages were filled with the mean). b. -How is the distribution of passengers on the ship by class?
###Code
#Reuse of the column_median function
print("\nMedian class column")
print(column_median('Pclass'))
###Output
Median class column
3.0
###Markdown
Graph 01 - Pie Graph - Distribution by Class (PCLASS) on board
###Code
df_titanic['Pclass'].value_counts().plot(kind='pie', figsize=(8,8), title="Graph of the distribution of passengers by class");
###Output
_____no_output_____
###Markdown
As we can see, the median passenger class is 3, the lowest class. c - What share of each sex survived? We will use the groupby function to group the data
###Code
#Correlation between Sex and Survived
df_titanic[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
Graph 2 - Bar graph - Average survivors by sex
###Code
#print percentage of women x men who survived
print("Percentage of women who survived:", df_titanic["Survived"][df_titanic["Sex"] == 'female'].value_counts(normalize = True)[1]*100)
print("Percentage of men who survived:", df_titanic["Survived"][df_titanic["Sex"] == 'male'].value_counts(normalize = True)[1]*100)
#Bar graph with the percentage of each sex that survived
df_titanic.groupby(["Sex"]).mean()["Survived"].plot(kind="bar", title="Média dos sobreviventes por sexo")
plt.ylabel('Percentage of survivors');
###Output
Percentage of women who survived: 74.20382165605095
Percentage of men who survived: 18.890814558058924
###Markdown
As we can see above, women survived more than men. d- What class of passengers survived?
###Code
#Correlation between Pcclass and Survived
df_titanic[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean()
###Output
_____no_output_____
###Markdown
Graph 3 - Bar graph - Average survivors by class
###Code
print("Percentage of class 1 that survived:", df_titanic["Survived"][df_titanic["Pclass"] == 1].value_counts(normalize = True)[1]*100)
print("Percentage of class 2 that survived:", df_titanic["Survived"][df_titanic["Pclass"] == 2].value_counts(normalize = True)[1]*100)
print("Percentage of class 3 that survived:", df_titanic["Survived"][df_titanic["Pclass"] == 3].value_counts(normalize = True)[1]*100)
df_titanic.groupby(["Pclass"]).mean()["Survived"].plot(kind="bar", title="Average of survivors by class")
plt.ylabel('Percentage of survivors');
###Output
Percentage of class 1 that survived: 62.96296296296296
Percentage of class 2 that survived: 47.28260869565217
Percentage of class 3 that survived: 24.236252545824847
###Markdown
As we can see above, more than 50% of class 1 survived. e- What is the mean age of the passengers who survived?
###Code
#HISTOGRAM CHART
#Left side of the graph shows the age of the passengers who died and the right side shows the age of the passengers who survived
grafico_sobreviventes_idade = sns.FacetGrid(df_titanic, col='Survived')
grafico_sobreviventes_idade.map(plt.hist, 'Age', bins=10).set_ylabels('people')
plt.ylabel('people', fontsize=10)
###Output
_____no_output_____ |
Python_Stock/Technical_Indicators/Linear_Regression_Slope.ipynb | ###Markdown
Linear Regression Slope (LRS) https://library.tradingtechnologies.com/trade/chrt-ti-linear-regression-slope.html
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol1 = 'AAPL'
symbol2 = 'QQQ'
start = '2018-08-01'
end = '2019-01-01'
# Read data
df1 = yf.download(symbol1,start,end)
df2 = yf.download(symbol2,start,end)
# View Columns
df1.head()
df2.head()
avg1 = df1['Adj Close'].mean()
avg2 = df2['Adj Close'].mean()
df1['AVGS1_S1'] = avg1 - df1['Adj Close']
df1['AVGS2_S2'] = avg2 - df2['Adj Close']
df1['Average_SQ'] = df1['AVGS1_S1']**2
df1['AVG_AVG'] = df1['AVGS1_S1']*df1['AVGS2_S2']
sum_sq = df1['Average_SQ'].sum()
sum_avg = df1['AVG_AVG'].sum()
slope = sum_avg/sum_sq
intercept = avg2-(slope*avg1)
m = (df1['Adj Close']-df1['Adj Close'].mean())*(df2['Adj Close']-df2['Adj Close'].mean())/(df1['Adj Close']-df1['Adj Close'].mean())
n = 20
df1['Slope'] = m.rolling(n).mean()
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
ax1.plot(df1['Adj Close'])
ax1.set_title('Stock '+ symbol1 +' Closing Price')
ax1.set_ylabel('Price')
ax1.legend(loc='best')
ax2 = plt.subplot(2, 1, 2)
#df1['VolumePositive'] = df1['Open'] < df1['Adj Close']
#colors = df1.VolumePositive.map({True: 'g', False: 'r'})
#ax2.bar(df1.index, df1['Volume'], color=colors, alpha=0.4)
ax2.plot(df1['Slope'], label='Slope')
ax2.grid()
ax2.set_ylabel('Slope')
ax2.set_xlabel('Date')
###Output
_____no_output_____
###Markdown
Candlestick with Linear Regression Slope
###Code
from matplotlib import dates as mdates
import datetime as dt
dfc = df1.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date))
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df1.Volume.max())
ax1.set_title('Stock '+ symbol1 +' Closing Price')
ax1.set_ylabel('Price')
ax2 = plt.subplot(2, 1, 2)
#df1['VolumePositive'] = df1['Open'] < df1['Adj Close']
#colors = df1.VolumePositive.map({True: 'g', False: 'r'})
#ax2.bar(df1.index, df1['Volume'], color=colors, alpha=0.4)
ax2.plot(df1['Slope'], label='Slope')
ax2.grid()
ax2.set_ylabel('Slope')
ax2.set_xlabel('Date')
###Output
_____no_output_____ |
utils/CostModels/Recursive_LSTM_v2_nDCG-loss/Recursive_LSTM_v2_ranking.ipynb | ###Markdown
Training the cost model with a ranking loss
###Code
from os import environ
environ['train_device'] = 'cuda:1' # training device: 'cpu' or 'cuda:X'
environ['store_device'] = 'cuda:1' # Data storing device: 'cpu' or 'cuda:X'
environ['dataset_file'] = '/data/scratch/mmerouani/processed_datasets/dataset_batch4X_train_val_set.pkl' #training / validation set
environ['test_dataset_file'] = '/data/scratch/mmerouani/processed_datasets/dataset_batch4X_test_set.pkl' #test set
environ['benchmark_dataset_file']='/data/scratch/mmerouani/processed_datasets/dataset_Benchmark_batch10.pkl' #benchmarks set
#a copy of these datasets can be found in /data/commit/tiramisu/cost-model_datasets/processed_datasets/
%run utils.py # imports and defines some utils functions
###Output
_____no_output_____
###Markdown
Data loading
###Code
train_val_dataset, val_bl, val_indices, train_bl, train_indices = load_data_meta_batches(dataset_file, 0.2, max_batch_size=832)
test_dataset, test_bl, test_indices, _, _ = load_data_meta_batches(test_dataset_file, 1)
###Output
loading batches from: /data/scratch/mmerouani/processed_datasets/dataset_batch4X_train_val_set.pkl
###Markdown
Model definition
###Code
input_size = 2534
criterion = ndcgLoss2PP_meta_batches # Using nDCG-Loss-2++
model = None
model = Model_Recursive_LSTM_v2_ranking(input_size, drops=[0.112, 0.112, 0.112, 0.112])
model.to(train_device)
optimizer = AdamW(model.parameters(),weight_decay=0.375e-2)
###Output
_____no_output_____
###Markdown
Model training
###Code
bl_dict={'train':train_bl, 'val':val_bl}
log_file = 'log_Recursive_LSTM_v2_ranking.txt'
losses, best_model = train_model_meta_batches(model, criterion, optimizer , max_lr=0.002, dataloader=bl_dict, num_epochs=800,
logFile=log_file, log_every=1)
###Output
Epoch 1/800: train Loss: 11.1857 val Loss: 8.1448 time: 1209.35s best loss: 8.1448
Epoch 2/800: train Loss: 8.4159 val Loss: 7.6152 time: 1214.85s best loss: 7.6152
Epoch 3/800: train Loss: 7.9054 val Loss: 7.3919 time: 1247.45s best loss: 7.3919
Epoch 4/800: train Loss: 7.6130 val Loss: 7.1376 time: 1225.39s best loss: 7.1376
Epoch 5/800: train Loss: 7.4745 val Loss: 6.9653 time: 1236.47s best loss: 6.9653
###Markdown
Loading a pre-trained model
###Code
model.load_state_dict(torch.load('Recursive_LSTM_v2_ndcgLoss2PP.pkl',map_location=train_device))
model.to(train_device)
print()
###Output
###Markdown
Basic results on the test set
###Code
test_df, test_df_rank_scores = get_results_df_meta_batches(test_dataset, test_bl, test_indices, model)
test_df
test_df_rank_scores.describe()
cf_matrix = confusion_matrix(test_df['real_rank'].astype('int32'), test_df['predicted_rank'].astype('int32'))
fig = px.imshow(cf_matrix,
labels=dict(x="Real rank", y="Predicted rank", color="Number of Schedules (out of 276k)" ),
x=[str(i) for i in range(1,34)],
y=[str(i) for i in range(1,34)]
)
fig.update_xaxes(side="top")
fig.show('png') #use fig.show() for interactive mode
###Output
_____no_output_____
###Markdown
Basic results on the benchmark set
###Code
benchmark_dataset, benchmark_bl, benchmark_indices, _, _ = load_data_meta_batches(benchmark_dataset_file, 1)
benchmark_df, benchmark_df_rank_scores = get_results_df_meta_batches(benchmark_dataset, benchmark_bl, benchmark_indices, model)
benchmark_df_rank_scores.describe()
pass
###Output
_____no_output_____ |
_notebooks/2019-08-15-nbafantasy2.ipynb | ###Markdown
Fantasy NBA 2- toc: false- branch: master- badges: true- comments: false- categories: [basketball, data science]
###Code
from pprint import pprint
import numpy as np
import pandas as pd
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import scipy
from scipy.stats import expon, skewnorm, norm
import nba_api
from nba_api.stats.static import teams, players
from nba_api.stats.endpoints import shotchartdetail, playercareerstats, playergamelog
import ballDontLie
from ballDontLie.util.api_nba import find_player_id
from ballDontLie.util.fantasy import compute_fantasy_points
seasons_range = ['2018-19', '2017-18', '2016-17', '2015-16']
players_range = ['Anthony Davis', 'James Harden', 'Stephen Curry', 'Giannis Antetokounmpo', 'Karl-Anthony Towns',
'Nikola Jokic', 'Joel Embiid', 'Paul George', 'Kawhi Leonard', 'Damian Lillard', 'Jimmy Butler',
'LeBron James', "Bradley Beal"]
player_id_map = {a: find_player_id(a) for a in players_range}
###Output
_____no_output_____
###Markdown
For the various players and the various seasons, let's look at the distributions of some of their box stats
###Code
for player, player_id in player_id_map.items():
fig, ax = plt.subplots(1,1)
df = pd.read_csv('data/{}.csv'.format(player.replace(" ","")))
df.hist(column=['FGM', 'FGA', 'FTM', 'FTA', "REB", 'AST',
'STL', 'BLK', "PTS"], ax=ax)
fig.suptitle(player)
###Output
/Users/ayang41/anaconda3/envs/py36/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3296: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
I'm going off of what these distributions look like over all the players:
* AST: Skewed normal
* BLK: Exponential
* FGA: Normal
* FGM: Normal
* FTA: Skewed normal
* FTM: Skewed normal
* PTS: Normal
* REB: Skewed normal
* STL: Skewed normal
For all players, I'm going to model each box stat as such. Given the gamelog data (blue), fit the model to that data, generate some values with that model (orange), and compare to the actual gamelog data.
Some comments:
For the "bigger" numbers like PTS, FGA, FGM, REB, the model distributions fit pretty well. For the "smaller" numbers like BLK or STL (a player will usually have 0, 1, 2, 3, or maybe 4 of that stat), the values are more discrete than the "bigger" numbers. If you can score points between 0 and 40, each reported point total behaves more continuously since there is more variety.
From earlier work with PyMC for Bayesian probability modeling, I could have tried using PyMC to sample parameters for each stat-distribution, rather than just do a single fitting. While that could help report a variety of parameters for each stat-distribution in addition to a sense of variation or uncertainty, I don't think it's necessary to venture that far into exploring the different distributions and parameters that could fit each box stat; the fitting schemes via scipy seem to work well.
It's possible there are better models for some of the data - I can't say my brain-database of statistical models is extensive, so I just kinda perused through `scipy.stats`.
Fitting a distribution helps formalize how much a player's game can vary (is he consistently a 20ppg player? Or is he hot and cold between 10 and 30 ppg?). Furthermore, if a player is out (injured or some other reason), that implicitly gets captured by a gamelog of 0pts, 0reb, etc. This is definitely important in fantasy because some may value a more reliable/consistent player who will show up to 80/82 games rather than a glass cannon who could drop 50 points but will only play 40-50/82 games.
These distributions assume we can ignore: coaching changes, team roster changes, and maybe player development. For player development, a younger player between 2015-2019 will demonstrate huge variance in two ways - young players are inconsistent game-to-game, but they can also develop rapidly season-by-season. At the very least, these distributions try to describe variance, which shows room where a young player could go off or bust on a given night. Factoring in season-by-season improvement would be hard - one would need to forecast a player's future stats rather than draw samples from a "fixed" distribution based on previous stats.
###Code
stat_model_map = {"AST": skewnorm, "BLK": expon, "FGA": norm, "FGM": norm,
"FTA": skewnorm, "FTM": skewnorm, "PTS": norm, "REB": skewnorm,
"STL": skewnorm}
for player, player_id in player_id_map.items():
fig, axarray = plt.subplots(3,3)
df = pd.read_csv('data/{}.csv'.format(player.replace(" ","")))
for i, (stat, model) in enumerate(stat_model_map.items()):
row = i // 3
col = i % 3
axarray[row, col].hist(df[stat], alpha=0.3)
axarray[row, col].set_title(stat)
params = model.fit(df[stat])
axarray[row, col].hist(model.rvs(*params, size=len(df[stat])), alpha=0.3)
fig.suptitle(player)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
At this point, for each player and box stat, we have a distribution that can describe their game-by-game performance. Maybe we can sample from this distribution 82 times (82 games per season) to get an idea of the fantasy points they'll yield (the fantasy points will depend on the league settings and how each league weights the box stats). To simulate a season for a player, we will model the distribution for each box stat and sample from it 82 times. This is our simulated season.
###Code
simulated_season = pd.DataFrame()
for player, player_id in player_id_map.items():
df = pd.read_csv('data/{}.csv'.format(player.replace(" ","")))
simulated_player_log = {}
for stat, model in stat_model_map.items():
params = model.fit(df[stat])
sample = model.rvs(*params, size=82)
simulated_player_log[stat] = sample
simulated_player_log_series = pd.Series(data=simulated_player_log, name=player)
simulated_season = simulated_season.append(simulated_player_log_series)
###Output
_____no_output_____
###Markdown
In addition to getting an 82-game list of ast, blk, fga, etc., we can compute an 82-game list of fantasy points (point values will depend on the league, but the default args for `compute_fantasy_points` are pulled from ESPN head-to-head points league default categories)
###Code
simulated_season = compute_fantasy_points(simulated_season)
simulated_season
###Output
_____no_output_____
###Markdown
To make things simpler to read, we will compress the dataframe into totals for the entire season, including the total fantasy points for that season
###Code
simulated_totals = simulated_season.copy()
for col in simulated_totals.columns:
simulated_totals[col] = [sum(a) for a in simulated_totals[col]]
simulated_totals.sort_values('FP', ascending=False)
###Output
_____no_output_____
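###Markdown
For reference, here is a minimal sketch of what a scoring function like `compute_fantasy_points` could look like. The weights below assume ESPN head-to-head points defaults; the actual implementation in `ballDontLie` may use different values, and `espn_points` is a name introduced here purely for illustration:
###Code
def espn_points(df, weights=None):
    # Hypothetical re-implementation for illustration only; the real
    # compute_fantasy_points in ballDontLie may differ
    if weights is None:
        weights = {'PTS': 1, 'REB': 1, 'AST': 2, 'STL': 4, 'BLK': 4,
                   'FGM': 2, 'FGA': -1, 'FTM': 1, 'FTA': -1}
    out = df.copy()
    # each cell holds an 82-game array, so this sums weighted stats game-by-game
    out['FP'] = sum(out[stat] * w for stat, w in weights.items())
    return out
###Output
_____no_output_____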
###Markdown
Generally speaking, this method is in line with many other fantasy predictions. James Harden, Anthony Davis, LeBron James, Karl-Anthony Towns, Steph Curry, Giannis, and Joel Embiid all top the list. In this "simulation" our sample size was 82 to match a season. We could repeat this simulation multiple times (so 82 * n times), which effectively increases our sample size from 82 to something much larger. Sampling enough is always a question, so we'll address that by simulating multiple seasons. Discussion of the approach will follow later
###Code
def simulate_n_seasons(player_id_map, stat_model_map, n=5):
# For a season, we just want the player, FP, and the rank
# Initialize dictionary of dictionary of lists to store this information across "epochs"
epoch_results = {}
for player in player_id_map:
epoch_results[player] = {'FP':[], 'rank':[]}
for i in range(n):
# Just copy-pasted code for convenience in a notebook
# If this were a python script, I would probably put these functions in a module/library somewhere
# Model the distribution of a player's box stats, simulate 82 times, compute fantasy points
simulated_season = pd.DataFrame()
for player, player_id in player_id_map.items():
df = pd.read_csv('data/{}.csv'.format(player.replace(" ","")))
simulated_player_log = {}
for stat, model in stat_model_map.items():
params = model.fit(df[stat])
sample = model.rvs(*params, size=82)
simulated_player_log[stat] = sample
simulated_player_log_series = pd.Series(data=simulated_player_log, name=player)
simulated_season = simulated_season.append(simulated_player_log_series)
simulated_season = compute_fantasy_points(simulated_season)
simulated_totals = simulated_season.copy()
for col in simulated_totals.columns:
simulated_totals[col] = [sum(a) for a in simulated_totals[col]]
simulated_totals = simulated_totals.sort_values('FP', ascending=False)
# Store the fantasy points and player rank for that simulated season
for player in player_id_map:
epoch_results[player]['FP'].append(simulated_totals[simulated_totals.index==player]['FP'].values[0])
epoch_results[player]['rank'].append(simulated_totals.index.get_loc(player))
return epoch_results
epoch_results = simulate_n_seasons(player_id_map, stat_model_map, n=10)
pprint(epoch_results)
###Output
{'Anthony Davis': {'FP': [2600.2718925173745,
2845.699762026843,
2732.8142372134657,
2841.1189014237507,
2971.6018500231144,
2513.4197141216027,
2671.907808833641,
2771.33344794354,
2642.4401320506413,
2668.8391100382087],
'rank': [3, 0, 2, 1, 0, 5, 2, 2, 2, 3]},
'Bradley Beal': {'FP': [2017.9614666080413,
1898.010517178849,
1804.7941022033058,
1763.042431335895,
1730.5132495765397,
1715.734094440131,
1838.502289868321,
1867.0678145772936,
1718.2489707303305,
1669.67185265485],
'rank': [11, 12, 12, 12, 12, 12, 12, 12, 12, 12]},
'Damian Lillard': {'FP': [2382.8646505353163,
2048.068798160208,
2477.586060738951,
2175.7697065125562,
2118.5140365357565,
2247.094879780937,
2072.756774030209,
2139.1192107972674,
2370.629467489499,
2257.211660962338],
'rank': [7, 10, 6, 9, 9, 8, 10, 10, 7, 7]},
'Giannis Antetokounmpo': {'FP': [2576.219466934943,
2686.76587859472,
2493.6482163098117,
2702.8549155043497,
2569.2853027544083,
2551.6813152859013,
2414.5128757456737,
2561.3702273681874,
2499.1103752312756,
2510.8640790442428],
'rank': [4, 3, 5, 2, 4, 3, 6, 3, 5, 6]},
'James Harden': {'FP': [2887.670929514508,
2843.389071489731,
3116.7281411190593,
2857.6232728678133,
2804.0017687844643,
2737.302937823686,
3007.392134417434,
2919.5661859360953,
2967.3026370340576,
2964.2404529023775],
'rank': [0, 1, 0, 0, 1, 1, 0, 0, 0, 0]},
'Jimmy Butler': {'FP': [2161.113361146545,
1909.6945703788665,
1986.2703953730904,
2136.973720154527,
2232.480783438934,
2166.1726145299494,
1962.9186078450955,
2004.7241270670554,
2039.3267955068004,
1973.2565162804035],
'rank': [9, 11, 11, 11, 8, 11, 11, 11, 11, 11]},
'Joel Embiid': {'FP': [2800.8172929600287,
2558.645668264587,
2517.7715107689955,
2382.7918864392273,
2575.564309480633,
2513.594633760708,
2639.2018786988106,
2536.501391112381,
2502.800851773036,
2546.098737416667],
'rank': [1, 5, 4, 6, 3, 4, 3, 4, 4, 5]},
'Karl-Anthony Towns': {'FP': [2438.6012883732774,
2597.9460772777898,
2437.4608371185327,
2559.1161865080608,
2562.0978448642904,
2655.419027730967,
2479.0296092681992,
2469.6417151916476,
2552.6318141655534,
2706.9979324697306],
'rank': [6, 4, 7, 4, 5, 2, 4, 5, 3, 2]},
'Kawhi Leonard': {'FP': [2192.2405107386744,
2416.815987294494,
2274.141300566951,
2137.2178749787718,
2234.4505712447212,
2213.2129013592594,
2249.1630270595037,
2255.1592921722886,
2220.8331127315014,
2193.058252470087],
'rank': [8, 6, 8, 10, 7, 9, 7, 8, 9, 8]},
'LeBron James': {'FP': [2718.7719659019,
2830.2185612255066,
2796.158077485558,
2679.1235682729816,
2793.4255223009245,
2876.54356690619,
2712.108129400297,
2785.7145304012624,
2789.5298777236117,
2929.5895036201873],
'rank': [2, 2, 1, 3, 2, 0, 1, 1, 1, 1]},
'Nikola Jokic': {'FP': [2065.5848606919335,
2211.6735032023817,
2211.9020404775197,
2257.4205116181893,
2103.9926989504497,
2279.1497412203903,
2204.2763112313123,
2424.980161680876,
2294.3977539747652,
2097.2016861034704],
'rank': [10, 8, 9, 7, 10, 7, 8, 7, 8, 10]},
'Paul George': {'FP': [1954.6580000665872,
2067.468713136873,
2048.682238913504,
2201.483917859578,
1949.90816918642,
2169.3138128730848,
2140.273072194092,
2242.3376266501905,
2052.403480766016,
2142.499394706088],
'rank': [12, 9, 10, 8, 11, 10, 9, 9, 10, 9]},
'Stephen Curry': {'FP': [2505.2751490181836,
2316.5751250140206,
2567.5576627793876,
2546.082326528174,
2560.3301729832456,
2391.130817270027,
2446.09455104389,
2428.6777503086655,
2412.173845766393,
2577.680897739675],
'rank': [5, 7, 3, 5, 6, 6, 5, 6, 6, 4]}}
###Markdown
To make things prettier, we can just summarize the player ranks over all the simulated seasons, providing us an estimated average rank and error
###Code
def summarize_epoch_results(epoch_results):
summary_stats = {}
for player in epoch_results:
summary_stats[player] = {}
avg_rank = np.mean(epoch_results[player]['rank'])
std_rank = np.std(epoch_results[player]['rank'])
summary_stats[player]['rank'] = avg_rank
summary_stats[player]['err'] = std_rank
return summary_stats
summary_stats = summarize_epoch_results(epoch_results)
sorted(summary_stats.items(), key=lambda v: v[1]['rank'])
###Output
_____no_output_____ |
AIDA2_Files/Jas Files/58089_PrelimPS_Padilla.ipynb | ###Markdown
Topic02a : Prelim Problem Set I
Padilla, Jasmine Clare B.
Case 1
Represent the following expressions in vectorized form using LaTeX.
> **Problem 1.a. System of Linear Equations**$$\left\{ \begin{array}\\ -y+z=\frac{1}{32}\\ \frac{1}{2}x -2y=0 \\ -x + \frac{3}{7}z=\frac{4}{5} \end{array}\right. $$
> **Problem 1.b. Linear Combination**$$ \cos{(\theta)}\hat{i} + \sin{(\theta)}\hat{j} - \csc{(2\theta)}\hat{k}$$
> **Problem 1.c. Scenario**
>>A conference has 200 student attendees, 45 professionals, and 15 members of the panel. There is a team of 40 people on the organizing committee. Represent the *percent* composition of each *attendee* type of the conference in matrix form.
Express your answers in LaTeX in the answer area.
Problem 1.a$$\begin{bmatrix}0&-1&1\\\frac{1}{2}&-2&0\\-1&0&\frac{3}{7}\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}\frac{1}{32}\\0\\\frac{4}{5}\end{bmatrix}$$
Problem 1.b$$\begin{bmatrix}\cos{(\theta)} & \sin{(\theta)} & -\csc{(2\theta)}\end{bmatrix}\begin{bmatrix}\hat{i}\\\hat{j}\\\hat{k}\end{bmatrix}$$
Problem 1.c$$Conference = \left[\begin{matrix}\frac{2}{3} & \frac{3}{20} & \frac{1}{20} & \frac{2}{15}\end{matrix}\;\middle|\;\begin{matrix}1 \end{matrix}\right]$$
(Check: the total head count is $200+45+15+40=300$, so $\frac{200}{300}=\frac{2}{3}$, $\frac{45}{300}=\frac{3}{20}$, $\frac{15}{300}=\frac{1}{20}$, $\frac{40}{300}=\frac{2}{15}$, and the fractions sum to 1.)
Case 2
> **Problem 2.a: Vector Magnitude**
>The magnitude of a vector is usually computed as:$$||v|| = \sqrt{a_0^2 + a_1^2 + ... +a_n^2}$$Whereas $v$ is any vector and $a_k$ are its elements wherein $k$ is the size of $v$.Re-formulate $||v||$ as a function of an inner product. Further discuss this concept and provide your user-defined function.
> **Problem 2.b: Angle Between Vectors**
> Inner products can also be related to the Law of Cosines. The property suggests that:$$u\cdot v = ||u||\cdot||v||\cos(\theta)$$Whereas $u$ and $v$ are vectors that have the same sizes and $\theta$ is the angle between $u$ and $v$.
> Explain the behavior of the dot product when the two vectors are perpendicular and when they are parallel.
Problem 2.a: Vector Magnitude
The magnitude of a vector is its length. It can be reformulated as an inner product, $||v|| = \sqrt{v \cdot v}$, since the inner product of a vector with itself is the sum of its squared elements. Velocity, displacement, force, momentum, and other quantities with direction are examples of vector quantities.
###Code
import numpy as np
def vect_mag(k):
return np.sqrt(sum(a**2 for a in k))
vector = np.array([1, 2, 3, 4, 5])
v = vect_mag(vector)
print(vector, v, sep = '\n')
###Output
[1 2 3 4 5]
7.416198487095663
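###Markdown
Reformulated as an inner product, $||v|| = \sqrt{\langle v, v \rangle}$, since the inner product of a vector with itself is the sum of its squared elements. A minimal sketch using `np.inner` (the name `vect_mag_inner` is introduced here for illustration):
###Code
def vect_mag_inner(k):
    # ||v|| = sqrt(<v, v>): the inner product of the vector with itself
    return np.sqrt(np.inner(k, k))

print(vect_mag_inner(np.array([1, 2, 3, 4, 5]))) # 7.416198487095663, matching vect_mag
###Output
_____no_output_____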
###Markdown
Problem 2.b: Angle Between VectorsGiven that vectors are quantities with direction, there must be something that indicates that direction: an angle. The dot product of two perpendicular vectors is zero, since $\cos(90^\circ) = 0$.$$u\cdot v = ||u||\cdot||v||\cos(90^\circ)$$$$u\cdot v = ||u||\cdot||v||\cdot 0$$$$u\cdot v = 0$$The dot product of two parallel vectors is the product of their magnitudes, since $\cos(0^\circ) = 1$.$$u\cdot v = ||u||\cdot||v||\cos(0^\circ)$$$$u\cdot v = ||u||\cdot||v||\cdot 1$$$$u\cdot v = ||u||\,||v||$$
###Code
def angle_vectors(u, v):
inner = np.inner(u, v)
norms = np.linalg.norm(u) * np.linalg.norm(v)
cos = inner / norms
rad = np.arccos(np.clip(cos, -1.0, 1.0))
deg = np.rad2deg(rad)
print("Radiant: ", rad)
print("Degree: ", deg)
return deg, rad
u = np.array([1, 2])
v = np.array([-1, -2])
print(u,v)
angle_vectors(u,v)
###Output
[1 2] [-1 -2]
Radiant: 3.1415926325163688
Degree: 179.99999879258172
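###Markdown
A quick numeric check of the two special cases discussed above (the vectors here are chosen purely for illustration):
###Code
u_perp, v_perp = np.array([1, 0]), np.array([0, 1]) # perpendicular vectors
u_par, v_par = np.array([1, 2]), np.array([2, 4]) # parallel vectors (v = 2u)

print(np.dot(u_perp, v_perp)) # 0, since cos(90 degrees) = 0
print(np.dot(u_par, v_par)) # 10
print(np.linalg.norm(u_par) * np.linalg.norm(v_par)) # 10.0 = ||u|| * ||v||, since cos(0) = 1
###Output
_____no_output_____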
|
notebooks/Defining_Custom_Weather_Models/Defining_custom_Weather_Models_in_RAiDER.ipynb | ###Markdown
Creating a new Weather Model in RAiDER **Author**: Jeremy Maurer, David Bekaert, Simran Sangha, Yang Lei - Jet Propulsion Laboratory, California Institute of TechnologyThis notebook provides an overview of how to get started using the RAiDER package for estimating tropospheric RADAR delays, and other functionality included in the **raiderDelay.py** program. We give an example of how to download and process delays using ERA-5 and HRRR weather models for the Los Angeles region. In this notebook, we will demonstrate how to:- Define and use a custom weather model for use with RAiDER The initial setup (Prep A section) should be run at the start of the notebook. Potential Errors: - RAiDER needs to be installed to run this notebook Terminology: - *Weather model*: A reanalysis weather product defining temperature, pressure, and humidity on a regular grid in some coordinate system (e.g., at regular lat/lon intervals in the standard EPSG:4326 reference frame). Some initial setup
###Code
import gdal
import os
import numpy as np
import matplotlib.pyplot as plt
## Defining the home and data directories at the processing location
work_dir = os.path.abspath(os.getcwd())
tutorial_home_dir = os.path.abspath(os.getcwd())
print("Work directory: ", work_dir)
print("Tutorial directory: ", tutorial_home_dir)
# Enable GDAL/OGR exceptions
gdal.UseExceptions()
# Verifying if ARIA-tools is installed correctly
try:
import RAiDER
except:
raise RuntimeError('RAiDER is missing from your PYTHONPATH')
os.chdir(work_dir)
###Output
Work directory: /Users/jlmd9g/software/RAiDER-docs/notebooks
Tutorial directory: /Users/jlmd9g/software/RAiDER-docs/notebooks
###Markdown
RAiDER Readers Weather model readers provide the link between the raw weather model data (e.g. available from ECMWF, ERA-5, ERA-5T, GMAO, MERRA-2, HRRR) and the absolute delay calculation. Readers can be added by users to account for other models and custom formats. Here we provide an overview of the WeatherModel class object and the requirements for writing one's own reader function. The WeatherModel class Functions to be overloaded: \_fetch: - Called by the WeatherModel.fetch method- downloads or loads data from the source files. load_weather: - Called by the WeatherModel.load method- loads data from the raw weather model files and parses it into the WeatherModel format (see below) Defining a custom ReaderThe example below describes the minimum required attributes and methods for a custom model reader. Each model reader should call as a super-class the "WeatherModel" base class and should initialize the base class as shown in the example. This will initialize all of the required attributes, etc. and default values for non-required attributes. The minimum required class methods are ```__init__```, ```_fetch``` and ```load_weather```, and auxiliary methods and attributes can be defined as needed for accessing the data API, loading and manipulating the data, etc. Required data and formatRAiDER expects that your custom weather model reader will result in a Python object with attributes consistent with the WeatherModel class and the RAiDER convention. The required variables are: - \_lats, \_lons- \_p, \_t- \_rh OR \_q, matching the corresponding \_humidityType. In addition, you need three variables that capture the coordinates of the data cubes: - \_xs, \_ys, \_zs. Each of the required variables should be a 3-D cube, all of the same shape, with axes ordered as (z, x, y), monotonically increasing. \_lons and \_lats should also be 3D cubes, replicated in the z-dimension. The longitude '_lons' needs to vary between -180 and 180 (longitudes between 0 and 360 are not supported). The '_zs' variable should be topographic height, but the height variable passed with the weather model data is often the geopotential height, which must be converted to topographic height. The WeatherModel class has a helper function for this conversion, which can be called within the custom class as self._get_heights(lats, geo_hgt), where geo_hgt is geopotential height.
###Code
from RAiDER.models.weatherModel import WeatherModel
class customModelReader(WeatherModel):
def __init__(self):
WeatherModel.__init__(self)
# **Note**: The equation for refractivity uses e, the partial pressure of water vapor, but typically weather models provide
# either q (specific humidity) or rh (relative humidity). RAiDER computes e automatically from
# either of these.
self._humidityType = 'q' # can be "q" (specific humidity) or "rh" (relative humidity)
# This is useful if a single weather model provides data on both fixed pressure levels and
# fixed model levels (e.g., ECMWF). You can define different readers for both types
self._model_level_type = 'pl' # Default, pressure levels are "pl", and model levels are "ml"
# Tuple of min/max years where data is available.
# valid range of the dataset. Users need to specify the start date and end date (can be "present")
self._valid_range = (datetime.datetime(2016,7,15),"Present")
# Lag time between today and when today's data will be available for download.
# Can be specified in hours "hours=3" or in days "days=3"
self._lag_time = datetime.timedelta(hours=3)
# model constants (these three constants are borrowed from the ECMWF model)
# These are the k's in the expression for refractivity: k1*(P/T) + k2*(e/T) + k3*(e/T^2)
self._k1 = 0.776 # [K/Pa]
self._k2 = 0.233 # [K/Pa]
self._k3 = 3.75e3 # [K^2/Pa]
# horizontal grid spacing. These are NOT used for projection information, but are used to
# estimate a buffer region around your query points to ensure that a large enough area is
# downloaded
self._lat_res = 3./111 # grid spacing in latitude
self._lon_res = 3./111 # grid spacing in longitude
self._x_res = 3. # x-direction grid spacing in the native weather model projection
# (if the projection is in lat/lon, it is the same as "self._lon_res")
self._y_res = 3. # y-direction grid spacing in the weather model native projection
# (if the projection is in lat/lon, it is the same as "self._lat_res")
self._Name = 'ABCD' # name of the custom weather model (better to be capitalized)
# Projections in RAiDER are defined using pyproj (python wrapper around Proj)
# If the projection is defined with EPSG code, one can use "self._proj = CRS.from_epsg(4326)"
# to replace the following lines to get "self._proj".
# Below we show the example of HRRR model with the parameters of its Lambert Conformal Conic projection
lon0 = 262.5
lat0 = 38.5
lat1 = 38.5
lat2 = 38.5
x0 = 0
y0 = 0
earth_radius = 6371229
p1 = CRS('+proj=lcc +lat_1={lat1} +lat_2={lat2} +lat_0={lat0} +lon_0={lon0} +x_0={x0} +y_0={y0} +a={a} +b={a} +units=m +no_defs'.format(lat1=lat1, lat2=lat2, lat0=lat0, lon0=lon0, x0=x0, y0=y0, a=earth_radius))
self._proj = p1
# This function needs to be writen by the users and is used to e.g. download a file containing weather
# data (p, t, rh, etc.) from the weather model server, or iff your weather models always live
# locally, you can define logic here to read a subset of the files based on the input bounding box.
def _fetch(self, lats, lons, time, out):
'''
Fetch weather model data from the custom weather model "ABCD"
Parameters
----------
NDArray: lats - latitude of your query points
NDArray: lons - longitude of your query points
Python Datetime object: time - datatime object (year,month,day,hour,minute,second)
String: out - name of downloaded dataset file from the custom weather model server
'''
# The list of inputs is exact; RAiDER will not pass any additional keyword arguments to this function,
# and all of the inputs must be provided.
# bounding box plus a buffer using the helper function from the WeatherModel base class
#
# Set the "Nextra" argument to match the number of additional grid cells in your custom model
# to download outside of your query points. This is needed when ray-tracing slant delays, for
# the points on the edge.
#
# Nextra should be something like ceil(zref*tan(inc)/horizontal_grid_spacing), where zref is the
# assumed height of the troposphere (default 15 km), inc is the average inclination angle, and
# horizontal_grid_spacing is from your model in km.
#
#Example: Sentinel-1 (inc ~ 35 degrees), ERA-5 (grid spacing ~ 30 km) and the default zref (15 km),
# Nextra = 1.
lat_min, lat_max, lon_min, lon_max = self._get_ll_bounds(lats, lons, Nextra = 1)
self._bounds = (lat_min, lat_max, lon_min, lon_max)
# Even if you don't need to download files, you will need to assign the "self._files" attribute so
# that the "load_weather" method knows what file contains the data
#
# In this example, you would need to define an auxilliary function _download_abcd_file (see below)
self._files = self._download_abcd_file(out, time, self._bounds)
# This function gets called by RAiDER to read individual variables from your source file and pre-
# process the data from the file into the format expected by RAiDER (see main text above and
# "Returns" description below).
def load_weather(self, filename):
'''
Load weather model variables from the downloaded file named filename
Parameters
----------
filename - filename of the downloaded weather model file
Returns
-------
# Doesn't directly return anything, but assigns values to self.
# Data cubes: should be ordered as (z, x, y)
NDArray: _p - 3D data cube of pressure in Pa
NDArray: _t - 3D data cube of temperature in Kelvin
NDArray: _q - 3D data cube of specific humidity in ***; only one of _q or _rh is required
NDArray: _rh - 3D data cube of relative humidity in ***; only one of _q or _rh is required
NDArray: _lats - 3D data cube of latitude. Should be WGS-84 latitudes (EPSG: 4326)
NDArray: _lons - 3D data cube of longitude. Should be WGS-84 latitudes (EPSG: 4326)
NDArray: _xs - 3D cube of x-coordinates of the data cubes in the native weather model projection
NDArray: _ys - 3D cube of y-coordinates of the data cubes in the native weather model projection
NDArray: _zs - 3D cube of z-coordinates of the data cubes in the native weather model projection
'''
# In this case we have an auxiliary function "_makeDataCubes" to do the pre-processing
# Pre-processing loads the data available from the weather model file and manipulates it
# as needed to get the data cubes into the form prescribed above.
lats, lons, xs, ys, t, q, p, hgt = self._makeDataCubes(filename)
# **Note**: RAiDER provides helper functions for certain types of operations; e.g. for converting
# surface pressure and geopotential to pressure and geopotential height:
# z, p, hgt = self._calculategeoh(z, lnsp) # z is geopotential and lnsp is the natural log of surface pressure
# **Note**: ECMWF provides heights as geopotential (units m^2/s^2). For a similar custom model, one can use
# the following line to convert to geopotential height:
# hgt = z / self._g0
# if geopotential height is provided, one can use the following line to convert to
# topographic height, which is then automatically assigned to "self._zs":
# self._get_heights(lats, hgt) # where hgt is geopotential height = geopotential / gravity acceleration
# Otherwise, if topographic height is provided directly:
_zs = hgt
# depending
self._t = t
self._q = q
self._p = p
self._lats = lats
self._lons = lons
# _xs: x-direction grid coordinate in the native weather model projection (=_lons if projection is WGS-84)
# _ys: y-direction grid coordinate in the native weather model projection (=_lats if projection is WGS-84)
# _zs: z-direction grid coordinate. Must be topographic height in meters.
self._xs = xs
self._ys = ys
self._zs = _zs
###########
def _download_abcd_file(self, out, date_time, bounding_box):
'''
Example auxilliary function for fetching data
Can be a file download from a server, grabbing a local filename, or accessing a cloud-based API
Parameters
----------
out - filename to save data to
date_time - Python datatime object
bounding_box - bounding box for the region of interest
Output:
out - returned filename from input
'''
return None
def _makeDataCubes(self, filename):
'''
Example auxilliary function for data pre-processing
Read 3-D data cubes from 'filename'
Parameters
----------
filename - filename of the downloaded weather model file from the server
Returns
-------
lats - latitude (3-D data cube)
lons - longitude (3-D data cube)
xs - x-direction grid dimension of the native weather model coordinates (3-D data cube; if in lat/lon, _xs = _lons)
ys - y-direction grid dimension of the native weather model coordinates (3-D data cube; if in lat/lon, _ys = _lats)
t - temperature (3-D data cube)
q - humidity (3-D data cube; could be relative humidity or specific humidity)
p - pressure level (3-D data cube; could be pressure level (preferred) or surface pressure)
hgt - height (3-D data cube; could be geopotential height or topographic height (preferred))
'''
return None, None, None, None, None, None, None, None
###Output
_____no_output_____
###Markdown
The ```_fetch``` methodThe ```_fetch``` method gets called by the RAiDER to fetch download or read the data. As shown in the example script, this is where you can download the data from a server, etc. If your weather model always lives on your local machine (or can always be locally accessed) this method can be very simple, but should still be defined as it will always be called. In addition, the filename of the data should be returned so that RAiDER knows what file to load. The ```load_weather``` method```load_weather``` is like ```_fetch``` in that it always gets called during the "load" routine. After you have a file available for RAiDER to read, this method will pre-process the data from your file to match the inputs RAiDER expects. In particular, after running this method, your weather model reader object should contain the variables listed in the example above. Adding the reader to the weather model listModify the allowed list of weather models "allowed.py" under the directory of "tools/RAiDER/models" to include the custom "ABCD" model as below.
###Code
ALLOWED_MODELS = [
'ERA5',
'ERA5T',
'ERAI',
'MERRA2',
'WRF',
'HRRR',
'GMAO',
'HDF5',
'HRES',
'NCMR',
'ABCD'
]
###Output
_____no_output_____
###Markdown
Debugging your custom readerThe WeatherModel class has two built-in plots for debugging purposes: ```WeatherModel.plot(plotType='pqt', savefig=True)``` ```WeatherModel.plot(plotType='wh', savefig=True)``` These commands plot pressure/humidity/temperature and wet and hydrostatic refractivity for the weather model, and are created by default when running ```raiderDelay.py``` normally. When debugging your custom model reader, you can use the command line executable ```raiderWeatherModelDebug.py```, which can take the exactly same list of input variables as ```raiderDelay.py``` and just create the debugging plots.
###Code
# Replace "ABCD" with your custom weather model name
# add the --out option if you want your results to be written to a directory other than the current one
raiderWeatherModelDebug.py --date 20200103 --time 23:00:00 -b 39 40 -79 -78 --model ABCD --zref 15000 -v
###Output
_____no_output_____
###Markdown
You can also test your custom model reader by running the three example commands from the ```raiderDelay.py``` helper message (i.e. running ```raiderDelay.py``` with the ```-h``` option will show the three example commands) with the weather model name "ERA5" replaced with the newly-added custom one, e.g. "ABCD".
###Code
# Replace "ABCD" with your custom weather model name
# add the --out option if you want your results to be written to a directory other than the current one
raiderDelay.py --date 20200103 --time 23:00:00 -b 39 40 -79 -78 --model ABCD --zref 15000 -v
###Output
_____no_output_____
###Markdown
Creating a new Weather Model in RAiDER **Author**: Jeremy Maurer, David Bekaert, Simran Sangha, Yang Lei - Jet Propulsion Laboratory, California Institute of TechnologyThis notebook provides an overview of how to get started using the RAiDER package for estimating tropospheric RADAR delays, and other functionality included in the **raiderDelay.py** program. We give an example of how to download and process delays using ERA-5 and HRRR weather models for the Los Angeles region. In this notebook, we will demonstrate how to:- Define and use a custom weather model for use with RAiDER The initial setup (Prep A section) should be run at the start of the notebook. Potential Errors: - RAiDER needs to be installed to run this notebook Terminology: - *Weather model*: A reanalysis weather product defining temperature, pressure, and humidity on a regular grid in some coordinate system (e.g., at regular lat/lon intervals in the standard EPSG:4326 reference frame). Some initial setup
###Code
import gdal
import os
import numpy as np
import matplotlib.pyplot as plt
## Defining the home and data directories at the processing location
work_dir = os.path.abspath(os.getcwd())
tutorial_home_dir = os.path.abspath(os.getcwd())
print("Work directory: ", work_dir)
print("Tutorial directory: ", tutorial_home_dir)
# Enable GDAL/OGR exceptions
gdal.UseExceptions()
# Verify that RAiDER is installed correctly
try:
    import RAiDER
except ImportError:
    raise RuntimeError('RAiDER is missing from your PYTHONPATH')
os.chdir(work_dir)
###Output
Work directory: /Users/jlmd9g/software/RAiDER-docs/notebooks
Tutorial directory: /Users/jlmd9g/software/RAiDER-docs/notebooks
###Markdown
RAiDER Readers Weather model readers provide the link between the raw weather model data (e.g. available from ECMWF, ERA-5, ERA-5T, GMAO, MERRA-2, HRRR) and the absolute delay calculation. Readers can be added by users to account for other models and custom formats. Here we provide an overview of the WeatherModel class object and the requirements for writing one's own reader function. The WeatherModel class Functions to be overloaded:\_fetch: - Called by the WeatherModel.fetch method- downloads or loads data from the source filesload_weather: - Called by the WeatherModel.load method- loads data from the raw weather model files and parses it into the WeatherModel format (see below) Defining a custom ReaderThe example below describes the minimum required attributes and methods for a custom model reader. Each model reader should subclass the "WeatherModel" base class and should initialize the base class as shown in the example. This initializes all of the required attributes and sets default values for the non-required ones. The minimum required class methods are ```__init__```, ```_fetch``` and ```load_weather```; auxiliary methods and attributes can be defined as needed for accessing the data API, loading and manipulating the data, etc. Required data and formatRAiDER expects that your custom weather model reader will result in a Python object with attributes consistent with the WeatherModel class and the RAiDER convention. The required variables are: - \_lats, \_lons- \_p, \_t- \_rh OR \_q, matching the corresponding \_humidityTypeIn addition, you need three variables that capture the coordinates of the data cubes:- \_xs, \_ys, \_zsEach of the required variables should be a 3-D cube, all of the same shape, with axes ordered as (z, x, y) and monotonically increasing in each dimension. \_lons and \_lats should also be 3D cubes, replicated in the z-dimension. The longitude '_lons' needs to vary between -180 and 180 (longitudes between 0 and 360 are not supported). The '_zs' variable should be topographic height, but the height variable supplied with the weather model data is often geopotential height, which must be converted to topographic height. The WeatherModel class has a helper function for this conversion, which can be called within the custom class as self._get_heights(lats, geo_hgt), where geo_hgt is geopotential height.
###Code
import datetime
from pyproj import CRS
from RAiDER.models.weatherModel import WeatherModel
class customModelReader(WeatherModel):
def __init__(self):
WeatherModel.__init__(self)
        # **Note**: The equation for refractivity uses e, the partial pressure of water vapor, but typically weather models provide
# either q (specific humidity) or rh (relative humidity). RAiDER computes e automatically from
# either of these.
self._humidityType = 'q' # can be "q" (specific humidity) or "rh" (relative humidity)
# This is useful if a single weather model provides data on both fixed pressure levels and
# fixed model levels (e.g., ECMWF). You can define different readers for both types
self._model_level_type = 'pl' # Default, pressure levels are "pl", and model levels are "ml"
        # Valid temporal range of the dataset, as a (start date, end date) tuple.
        # Users need to specify the start date and the end date (which can be "Present")
self._valid_range = (datetime.datetime(2016,7,15),"Present")
# Lag time between today and when today's data will be available for download.
# Can be specified in hours "hours=3" or in days "days=3"
self._lag_time = datetime.timedelta(hours=3)
        # model constants (these three constants are borrowed from the ECMWF model)
        # These are the k's in the expression for refractivity: k1*(P/T) + k2*(e/T) + k3*(e/T^2)
self._k1 = 0.776 # [K/Pa]
self._k2 = 0.233 # [K/Pa]
self._k3 = 3.75e3 # [K^2/Pa]
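        # For example, the hydrostatic term alone at P = 1e5 Pa and T = 288 K is
        # roughly k1 * 1e5 / 288 ~ 269 N-units of (dimensionless) refractivity.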
# horizontal grid spacing. These are NOT used for projection information, but are used to
# estimate a buffer region around your query points to ensure that a large enough area is
# downloaded
self._lat_res = 3./111 # grid spacing in latitude
self._lon_res = 3./111 # grid spacing in longitude
self._x_res = 3. # x-direction grid spacing in the native weather model projection
# (if the projection is in lat/lon, it is the same as "self._lon_res")
self._y_res = 3. # y-direction grid spacing in the weather model native projection
# (if the projection is in lat/lon, it is the same as "self._lat_res")
        self._Name = 'ABCD' # name of the custom weather model (preferably capitalized)
# Projections in RAiDER are defined using pyproj (python wrapper around Proj)
# If the projection is defined with EPSG code, one can use "self._proj = CRS.from_epsg(4326)"
# to replace the following lines to get "self._proj".
# Below we show the example of HRRR model with the parameters of its Lambert Conformal Conic projection
lon0 = 262.5
lat0 = 38.5
lat1 = 38.5
lat2 = 38.5
x0 = 0
y0 = 0
earth_radius = 6371229
p1 = CRS('+proj=lcc +lat_1={lat1} +lat_2={lat2} +lat_0={lat0} +lon_0={lon0} +x_0={x0} +y_0={y0} +a={a} +b={a} +units=m +no_defs'.format(lat1=lat1, lat2=lat2, lat0=lat0, lon0=lon0, x0=x0, y0=y0, a=earth_radius))
self._proj = p1
    # This function needs to be written by the user and is used to e.g. download a file containing weather
    # data (p, t, rh, etc.) from the weather model server; or, if your weather models always live
    # locally, you can define logic here to read a subset of the files based on the input bounding box.
def _fetch(self, lats, lons, time, out):
'''
Fetch weather model data from the custom weather model "ABCD"
Parameters
----------
NDArray: lats - latitude of your query points
NDArray: lons - longitude of your query points
        Python datetime object: time - datetime object (year,month,day,hour,minute,second)
String: out - name of downloaded dataset file from the custom weather model server
'''
# The list of inputs is exact; RAiDER will not pass any additional keyword arguments to this function,
# and all of the inputs must be provided.
        # Get the bounding box plus a buffer using the helper function from the WeatherModel base class
#
# Set the "Nextra" argument to match the number of additional grid cells in your custom model
# to download outside of your query points. This is needed when ray-tracing slant delays, for
# the points on the edge.
#
# Nextra should be something like ceil(zref*tan(inc)/horizontal_grid_spacing), where zref is the
# assumed height of the troposphere (default 15 km), inc is the average inclination angle, and
# horizontal_grid_spacing is from your model in km.
#
        # Example: Sentinel-1 (inc ~ 35 degrees), ERA-5 (grid spacing ~ 30 km) and the default zref (15 km),
# Nextra = 1.
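        #          i.e., Nextra = ceil(15 * tan(35 deg) / 30) = ceil(0.35) = 1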
lat_min, lat_max, lon_min, lon_max = self._get_ll_bounds(lats, lons, Nextra = 1)
self._bounds = (lat_min, lat_max, lon_min, lon_max)
# Even if you don't need to download files, you will need to assign the "self._files" attribute so
# that the "load_weather" method knows what file contains the data
#
# In this example, you would need to define an auxilliary function _download_abcd_file (see below)
self._files = self._download_abcd_file(out, time, self._bounds)
# This function gets called by RAiDER to read individual variables from your source file and pre-
# process the data from the file into the format expected by RAiDER (see main text above and
# "Returns" description below).
def load_weather(self, filename):
'''
Load weather model variables from the downloaded file named filename
Parameters
----------
filename - filename of the downloaded weather model file
Returns
-------
# Doesn't directly return anything, but assigns values to self.
# Data cubes: should be ordered as (z, x, y)
NDArray: _p - 3D data cube of pressure in Pa
NDArray: _t - 3D data cube of temperature in Kelvin
        NDArray: _q - 3D data cube of specific humidity (kg/kg); only one of _q or _rh is required
        NDArray: _rh - 3D data cube of relative humidity (percent); only one of _q or _rh is required
NDArray: _lats - 3D data cube of latitude. Should be WGS-84 latitudes (EPSG: 4326)
        NDArray: _lons - 3D data cube of longitude. Should be WGS-84 longitudes (EPSG: 4326)
NDArray: _xs - 3D cube of x-coordinates of the data cubes in the native weather model projection
NDArray: _ys - 3D cube of y-coordinates of the data cubes in the native weather model projection
NDArray: _zs - 3D cube of z-coordinates of the data cubes in the native weather model projection
'''
# In this case we have an auxiliary function "_makeDataCubes" to do the pre-processing
# Pre-processing loads the data available from the weather model file and manipulates it
# as needed to get the data cubes into the form prescribed above.
lats, lons, xs, ys, t, q, p, hgt = self._makeDataCubes(filename)
# **Note**: RAiDER provides helper functions for certain types of operations; e.g. for converting
# surface pressure and geopotential to pressure and geopotential height:
# z, p, hgt = self._calculategeoh(z, lnsp) # z is geopotential and lnsp is the natural log of surface pressure
# **Note**: ECMWF provides heights as geopotential (units m^2/s^2). For a similar custom model, one can use
# the following line to convert to geopotential height:
# hgt = z / self._g0
# if geopotential height is provided, one can use the following line to convert to
# topographic height, which is then automatically assigned to "self._zs":
# self._get_heights(lats, hgt) # where hgt is geopotential height = geopotential / gravity acceleration
# Otherwise, if topographic height is provided directly:
_zs = hgt
self._t = t
self._q = q
self._p = p
self._lats = lats
self._lons = lons
# _xs: x-direction grid coordinate in the native weather model projection (=_lons if projection is WGS-84)
# _ys: y-direction grid coordinate in the native weather model projection (=_lats if projection is WGS-84)
# _zs: z-direction grid coordinate. Must be topographic height in meters.
self._xs = xs
self._ys = ys
self._zs = _zs
###########
def _download_abcd_file(self, out, date_time, bounding_box):
'''
        Example auxiliary function for fetching data
Can be a file download from a server, grabbing a local filename, or accessing a cloud-based API
Parameters
----------
out - filename to save data to
        date_time - Python datetime object
bounding_box - bounding box for the region of interest
Output:
out - returned filename from input
'''
return None
def _makeDataCubes(self, filename):
'''
        Example auxiliary function for data pre-processing
Read 3-D data cubes from 'filename'
Parameters
----------
filename - filename of the downloaded weather model file from the server
Returns
-------
lats - latitude (3-D data cube)
lons - longitude (3-D data cube)
xs - x-direction grid dimension of the native weather model coordinates (3-D data cube; if in lat/lon, _xs = _lons)
ys - y-direction grid dimension of the native weather model coordinates (3-D data cube; if in lat/lon, _ys = _lats)
t - temperature (3-D data cube)
q - humidity (3-D data cube; could be relative humidity or specific humidity)
p - pressure level (3-D data cube; could be pressure level (preferred) or surface pressure)
hgt - height (3-D data cube; could be geopotential height or topographic height (preferred))
'''
return None, None, None, None, None, None, None, None
###Output
_____no_output_____
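###Markdown
Before wiring the reader into RAiDER, it can help to sanity-check that ```load_weather``` populated the cubes according to the conventions above. Below is a minimal sketch; the function name ```check_cubes``` and the reader instance ```m``` are ours, not part of RAiDER, and it assumes the fetch/load steps have already been run.
###Code
import numpy as np
def check_cubes(m):
    '''Spot-check the RAiDER data-cube conventions described above.'''
    wet = m._q if m._humidityType == 'q' else m._rh
    cubes = [m._p, m._t, wet, m._lats, m._lons, m._xs, m._ys, m._zs]
    # all cubes must share a single 3-D (z, x, y) shape
    assert len({c.shape for c in cubes}) == 1 and cubes[0].ndim == 3
    # heights must increase monotonically along the z-axis
    assert np.all(np.diff(m._zs, axis=0) > 0)
    # longitudes must lie in [-180, 180]
    assert m._lons.min() >= -180 and m._lons.max() <= 180
###Output
_____no_output_____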
###Markdown
The ```_fetch``` methodThe ```_fetch``` method gets called by RAiDER to fetch, download, or read the data. As shown in the example script, this is where you can download the data from a server, etc. If your weather model always lives on your local machine (or can always be accessed locally) this method can be very simple, but it should still be defined because it will always be called. In addition, the filename of the data should be returned so that RAiDER knows which file to load. The ```load_weather``` method```load_weather``` is like ```_fetch``` in that it always gets called during the "load" routine. After you have a file available for RAiDER to read, this method pre-processes the data from your file to match the inputs RAiDER expects. In particular, after running this method, your weather model reader object should contain the variables listed in the example above. Adding the reader to the weather model listModify the list of allowed weather models in "allowed.py" under the "tools/RAiDER/models" directory to include the custom "ABCD" model as below.
###Code
ALLOWED_MODELS = [
'ERA5',
'ERA5T',
'ERAI',
'MERRA2',
'WRF',
'HRRR',
'GMAO',
'HDF5',
'HRES',
'NCMR',
'ABCD'
]
###Output
_____no_output_____
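###Markdown
A quick, hedged check that the new name was registered (this assumes the edited allowed.py is importable as RAiDER.models.allowed, matching the source-tree layout above):
###Code
from RAiDER.models.allowed import ALLOWED_MODELS
assert 'ABCD' in ALLOWED_MODELS, 'Custom model is not registered in allowed.py'
###Output
_____no_output_____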
###Markdown
Debugging your custom readerThe WeatherModel class has two built-in plots for debugging purposes: ```WeatherModel.plot(plotType='pqt', savefig=True)``` ```WeatherModel.plot(plotType='wh', savefig=True)``` These commands plot pressure/humidity/temperature and the wet and hydrostatic refractivity for the weather model, and the plots are created by default when running ```raiderDelay.py``` normally. When debugging your custom model reader, you can use the command-line executable ```raiderWeatherModelDebug.py```, which takes exactly the same list of input variables as ```raiderDelay.py``` and just creates the debugging plots.
###Code
# Replace "ABCD" with your custom weather model name
# add the --out option if you want your results to be written to a directory other than the current one
!raiderWeatherModelDebug.py --date 20200103 --time 23:00:00 -b 39 40 -79 -78 --model ABCD --zref 15000 -v
###Output
_____no_output_____
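###Markdown
The same plots can also be produced directly in Python once the reader holds data. A minimal sketch (it assumes the ```customModelReader``` defined above with the fetch/load steps already run; the variable name ```wm``` is ours):
###Code
wm = customModelReader()
# ... fetch and load steps, as normally driven by RAiDER (see _fetch and load_weather above) ...
wm.plot(plotType='pqt', savefig=True) # pressure / humidity / temperature
wm.plot(plotType='wh', savefig=True) # wet and hydrostatic refractivity
###Output
_____no_output_____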
###Markdown
You can also test your custom model reader by running the three example commands from the ```raiderDelay.py``` help message (i.e. running ```raiderDelay.py``` with the ```-h``` option will show the three example commands) with the weather model name "ERA5" replaced with the newly-added custom one, e.g. "ABCD".
###Code
# Replace "ABCD" with your custom weather model name
# add the --out option if you want your results to be written to a directory other than the current one
!raiderDelay.py --date 20200103 --time 23:00:00 -b 39 40 -79 -78 --model ABCD --zref 15000 -v
###Output
_____no_output_____ |
site/en/tutorials/images/classification.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial shows how to classify images of cats and dogs. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.* _Overfitting_ —How to identify and prevent it.* _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import Tensorflow and the Keras classes needed to construct our model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs Cats dataset from Kaggle. Download the archive version of the dataset; `tf.keras.utils.get_file` stores it in the default Keras cache directory (`~/.keras/datasets`).
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:cats_and_dogs_filtered|__ train |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....] |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]|__ validation |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....] |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...] After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network:1. Read images from the disk.2. Decode the contents of these images and convert them into a proper grid format as per their RGB content.3. Convert them into floating point tensors.4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network. (A by-hand version of these four steps is shown after the generator definitions below.)
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
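###Markdown
To make the four steps above concrete, here is the same pipeline done by hand for a single image (a standalone sketch that reads the first training cat picture; it is not needed for the rest of the tutorial):
###Code
img_path = os.path.join(train_cats_dir, os.listdir(train_cats_dir)[0])
raw = tf.io.read_file(img_path) # 1. read the image file from disk
img = tf.image.decode_jpeg(raw, channels=3) # 2. decode it into an RGB grid of pixels
img = tf.image.convert_image_dtype(img, tf.float32) # 3./4. float tensor rescaled to [0, 1]
img = tf.image.resize(img, (IMG_HEIGHT, IMG_WIDTH))
print(img.shape, float(tf.reduce_min(img)), float(tf.reduce_max(img)))
###Output
_____no_output_____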
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from disk, applies rescaling, and resizes the images into the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images Visualize the training images by extracting a batch of images from the training generator—which is 128 images in this example, matching the `batch_size` set above—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset. The return value of the `next` function is of the form `(x_train, y_train)`, where x_train contains the training features and y_train their labels. Discard the labels to only visualize the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the *ADAM* optimizer and *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summaryView all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model Use the model's `fit_generator` method to train the network on the batches produced by the `ImageDataGenerator` generators.
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin and the model has achieved only around **70%** accuracy on the validation set.Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% in the training process. Also, the difference between training and validation accuracy is noticeable—a sign of *overfitting*.When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to the model. Data augmentation Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples using random transformations that yield believable-looking images. The goal is that the model will never see the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better.Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process. Augment and visualize data Begin by applying random horizontal flip augmentation to the dataset and see what individual images look like after the transformation. Apply horizontal flip Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and apply 45 degrees of rotation randomly to the training examples.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
# zoom_range from 0 - 1 where 1 = 100%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you applied rescale, 45 degree rotation, width shift, height shift, horizontal flip and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization* that randomly zeroes a subset of a layer's outputs during training, which prevents units from co-adapting too much and helps the network avoid overfitting on small training sets. Dropout is the regularization technique used in this tutorial.When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from the applied layer during the training process. Dropout takes a fractional number as its input value, in a form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.When applying 0.1 dropout to a certain layer, it randomly kills 10% of the output units in each training epoch.Create a network architecture with this new dropout feature and apply it to different convolutions and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout to the first and last max pool layers. Applying dropout will randomly set 20% of the neurons to zero during each training epoch. This helps to avoid overfitting on the training dataset. (A standalone demonstration of the dropout behavior follows the model definition below.)
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
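###Markdown
As promised above, here is a quick standalone look at what a dropout layer does at training time. This toy sketch is ours; the rate matches the 0.2 used in the model, and the all-ones input just makes the effect easy to read.
###Code
drop = Dropout(0.2)
toy = np.ones((1, 10), dtype='float32')
# during training, ~20% of units are zeroed and the survivors are scaled by 1/(1 - 0.2) = 1.25
print(drop(toy, training=True).numpy())
###Output
_____no_output_____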
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view the layers summary.
###Code
model_new.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualizing the new model after training, you can see that there is significantly less overfitting than before. The accuracy should go up after training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial shows how to classify images of flowers. It creates an image classifier using a `tf.keras.Sequential` model, and loads data using `tf.keras.utils.image_dataset_from_directory`. You will gain practical experience with the following concepts:* Efficiently loading a dataset off disk.* Identifying overfitting and applying techniques to mitigate it, including data augmentation and dropout.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import TensorFlow and other libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
###Output
_____no_output_____
###Markdown
Download and explore the dataset This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains five sub-directories, one per class:```flower_photo/ daisy/ dandelion/ roses/ sunflowers/ tulips/```
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
Here are some roses:
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
###Output
_____no_output_____
###Markdown
And some tulips:
###Code
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
###Output
_____no_output_____
###Markdown
Load data using a Keras utilityLet's load these images off disk using the helpful `tf.keras.utils.image_dataset_from_directory` utility. This will take you from a directory of images on disk to a `tf.data.Dataset` in just a couple lines of code. If you like, you can also write your own data loading code from scratch by visiting the [Load and preprocess images](../load_data/images.ipynb) tutorial. Create a dataset Define some parameters for the loader:
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
It's good practice to use a validation split when developing your model. Let's use 80% of the images for training, and 20% for validation.
###Code
train_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
You can find the class names in the `class_names` attribute on these datasets. These correspond to the directory names in alphabetical order.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
Visualize the dataHere are the first nine images from the training dataset:
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will train a model using these datasets by passing them to `Model.fit` in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to color channels RGB). The `label_batch` is a tensor of the shape `(32,)`; these are the corresponding labels for the 32 images.You can call `.numpy()` on the `image_batch` and `labels_batch` tensors to convert them to a `numpy.ndarray`. Configure the dataset for performanceLet's make sure to use buffered prefetching so you can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data:- `Dataset.cache` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.- `Dataset.prefetch` overlaps data preprocessing and model execution while training.Interested readers can learn more about both methods, as well as how to cache data to disk in the *Prefetching* section of the [Better performance with the tf.data API](../../guide/data_performance.ipynb) guide.
###Code
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Standardize the data The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small.Here, you will standardize values to be in the `[0, 1]` range by using `tf.keras.layers.Rescaling`:
###Code
normalization_layer = layers.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
There are two ways to use this layer. You can apply it to the dataset by calling `Dataset.map`:
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixel values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
Or, you can include the layer inside your model definition, which can simplify deployment. Let's use the second approach here. Note: You previously resized images using the `image_size` argument of `tf.keras.utils.image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the `tf.keras.layers.Resizing` layer. Create the modelThe [Sequential](../../guide/keras/sequential_model.ipynb) model consists of three convolution blocks (`tf.keras.layers.Conv2D`) with a max pooling layer (`tf.keras.layers.MaxPooling2D`) in each of them. There's a fully-connected layer (`tf.keras.layers.Dense`) with 128 units on top of it that is activated by a ReLU activation function (`'relu'`). This model has not been tuned for high accuracy—the goal of this tutorial is to show a standard approach.
###Code
num_classes = 5
model = Sequential([
layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the `tf.keras.optimizers.Adam` optimizer and `tf.keras.losses.SparseCategoricalCrossentropy` loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument to `Model.compile`.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summaryView all the layers of the network using the model's `Model.summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model
###Code
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results Create plots of loss and accuracy on the training and validation sets:
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
The plots show that training accuracy and validation accuracy are off by large margins, and the model has achieved only around 60% accuracy on the validation set.Let's inspect what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 60% in the training process. Also, the difference in accuracy between training and validation accuracy is noticeable—a sign of [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).When there are a small number of training examples, the model sometimes learns from noises or unwanted details from training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *Dropout* to your model. Data augmentation Overfitting generally occurs when there are a small number of training examples. [Data augmentation](./data_augmentation.ipynb) takes the approach of generating additional training data from your existing examples by augmenting them using random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better.You will implement data augmentation using the following Keras preprocessing layers: `tf.keras.layers.RandomFlip`, `tf.keras.layers.RandomRotation`, and `tf.keras.layers.RandomZoom`. These can be included inside your model like other layers, and run on the GPU.
###Code
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.RandomRotation(0.1),
layers.RandomZoom(0.1),
]
)
###Output
_____no_output_____
###Markdown
Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
###Code
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will use data augmentation to train a model in a moment. DropoutAnother technique to reduce overfitting is to introduce [dropout](https://developers.google.com/machine-learning/glossary#dropout_regularization) regularization to the network.When you apply dropout to a layer, it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, in a form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.Let's create a new neural network with `tf.keras.layers.Dropout` before training it using the augmented images:
###Code
model = Sequential([
data_augmentation,
layers.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
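###Markdown
Before compiling, here is a minimal standalone sketch (not part of the tutorial's pipeline, assuming the `tf` and `layers` imports from the top of this notebook) of what a `Dropout` layer actually does to a batch of activations:
###Code
# Illustrative only: during training, Dropout zeroes a random ~20% of units
# and rescales the survivors by 1/(1 - 0.2) so the expected sum is unchanged;
# at inference it is the identity.
demo_dropout = layers.Dropout(0.2)
demo_x = tf.ones((1, 10))
print(demo_dropout(demo_x, training=True).numpy())   # some zeros, rest ~1.25
print(demo_dropout(demo_x, training=False).numpy())  # all ones
###Output
_____no_output_____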
###Markdown
Compile and train the model
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results

After applying data augmentation and `tf.keras.layers.Dropout`, there is less overfitting than before, and training and validation accuracy are closer aligned:
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Predict on new data Finally, let's use our model to classify an image that wasn't included in the training or validation sets. Note: Data augmentation and dropout layers are inactive at inference time.
###Code
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = tf.keras.utils.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = tf.keras.utils.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
###Output
_____no_output_____
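###Markdown
To see the note above in action, a small sketch using the `data_augmentation` pipeline defined earlier: with `training=False` (the mode used inside `Model.predict`) the random layers are pass-through, while `training=True` re-enables the random transformations.
###Code
aug_off = data_augmentation(img_array, training=False)  # identity at inference
aug_on = data_augmentation(img_array, training=True)    # random flip/rotation/zoom
print(np.allclose(img_array, aug_off))                  # True
print(np.allclose(img_array, aug_on))                   # almost always False
###Output
_____no_output_____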
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification

This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:

* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.
* _Overfitting_—how to identify and prevent it.
* _Data augmentation_ and _dropout_—key techniques for fighting overfitting that are incorporated into the data pipeline and the image classifier model.

This tutorial follows a basic machine learning workflow:

1. Examine and understand data
2. Build an input pipeline
3. Build the model
4. Train the model
5. Test the model
6. Improve the model and repeat the process

Import packages

Let's start by importing the required packages. The `os` package is used to read files and the directory structure, NumPy is used to convert the Python list to a NumPy array and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import TensorFlow and the Keras classes needed to construct our model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data

Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs. Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:

```
cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
```

After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
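###Markdown
To confirm the layout described above, here is a quick illustrative peek at a few of the filenames:
###Code
# File names follow the cat.<n>.jpg / dog.<n>.jpg pattern described above.
print(os.listdir(train_cats_dir)[:3])
print(os.listdir(train_dogs_dir)[:3])
###Output
_____no_output_____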
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation

Format the images into appropriately pre-processed floating-point tensors before feeding them to the network:

1. Read images from the disk.
2. Decode the contents of these images and convert them into a proper grid format as per their RGB content.
3. Convert them into floating-point tensors.
4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.

Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from the disk, applies rescaling, and resizes the images into the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images

Visualize the training images by extracting a batch of images from the training generator—which is 128 images in this example, given the `batch_size` set above—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset. The return value of the `next` function is in the form `(x_train, y_train)`, where `x_train` contains the training features and `y_train` their labels. Discard the labels to only visualize the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
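###Markdown
As a further check (a sketch using the generator defined above), each batch is an `(images, labels)` tuple, the pixel values are already rescaled to `[0, 1]`, and `class_indices` shows how the directory names map to the binary labels:
###Code
batch_images, batch_labels = next(train_data_gen)
print(batch_images.shape)  # (128, 150, 150, 3) given batch_size = 128
print(batch_labels.shape)  # (128,)
print(batch_images.min(), batch_images.max())  # within [0.0, 1.0] after rescaling
print(train_data_gen.class_indices)            # e.g. {'cats': 0, 'dogs': 1}
###Output
_____no_output_____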
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function. The model outputs class probabilities based on binary classification by the `sigmoid` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the model

For this tutorial, choose the *ADAM* optimizer and *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summary

View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model

Use the model's `fit_generator` method to train the network on batches drawn from the generators; `steps_per_epoch` and `validation_steps` specify how many batches make up one pass over the training and validation sets, respectively.
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
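###Markdown
As a quick sanity check on those step counts (assuming the usual 2,000 training and 1,000 validation images of this filtered dataset):
###Code
# fit_generator draws this many batches per epoch from each generator.
print(total_train // batch_size)  # 2000 // 128 = 15 steps per epoch
print(total_val // batch_size)    # 1000 // 128 = 7 validation steps
###Output
_____no_output_____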
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model.

Overfitting

In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% in the training process. Also, the difference in accuracy between training and validation accuracy is noticeable—a sign of *overfitting*.

When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.

There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to the model.

Data augmentation

Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation generates more training data from existing samples by applying random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This helps expose the model to more aspects of the data so it generalizes better.

Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process.

Augment and visualize data

Begin by applying a random horizontal flip augmentation to the dataset and see what individual images look like after the transformation.

Apply horizontal flip

Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
# train_data_gen[0] returns the first (images, labels) batch; [0][0] is the
# image array and the final [0] its first image. Each indexing call re-draws
# the random augmentation, so the same source image is transformed
# differently on each of the five draws.
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and apply 45 degrees of rotation randomly to the training examples.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together

Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout

Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization* that randomly disables a fraction of a layer's output units during training, which prevents the network from relying too heavily on any particular activation and so reduces overfitting on small training sets. Dropout is the regularization technique used in this tutorial.

When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from the applied layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, meaning that 10%, 20%, or 40% of the output units are dropped at random from the applied layer. For example, when applying 0.1 dropout to a certain layer, 10% of its output units are randomly zeroed in each training step.

Create a network architecture with this new dropout feature and apply it to different convolutional and fully-connected layers.

Creating a new network with dropout

Here, you apply dropout to the first and last max pooling layers. Applying dropout will randomly set 20% of the neurons to zero during training. This helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the model

After introducing dropout into the network, compile the model and view the layer summary.
###Code
model_new.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model

After successfully introducing data augmentation for the training examples and adding dropout to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model

Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up after training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification

This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:

* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.
* _Overfitting_—how to identify and prevent it.
* _Data augmentation_ and _dropout_—key techniques for fighting overfitting that are incorporated into the data pipeline and the image classifier model.

This tutorial follows a basic machine learning workflow:

1. Examine and understand data
2. Build an input pipeline
3. Build the model
4. Train the model
5. Test the model
6. Improve the model and repeat the process

Import packages

Let's start by importing the required packages. The `os` package is used to read files and the directory structure, NumPy is used to convert the Python list to a NumPy array and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.

Import TensorFlow and the Keras classes needed to construct our model.
###Code
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data

Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs. Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:

```
cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
```

After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation

Format the images into appropriately pre-processed floating-point tensors before feeding them to the network:

1. Read images from the disk.
2. Decode the contents of these images and convert them into a proper grid format as per their RGB content.
3. Convert them into floating-point tensors.
4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.

Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from the disk, applies rescaling, and resizes the images into the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images

Visualize the training images by extracting a batch of images from the training generator—which is 128 images in this example—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset. The return value of the `next` function is in the form `(x_train, y_train)`, where `x_train` contains the training features and `y_train` their labels. Discard the labels to only visualize the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model

The model consists of three convolution blocks with a max pooling layer after each of them, followed by a fully connected layer with 512 units and a `relu` activation. The final `Dense(1)` layer has no activation, so it outputs a raw logit for the binary classification.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model

For this tutorial, choose the *ADAM* optimizer and the *binary cross entropy* loss function. Because the final `Dense(1)` layer has no activation and therefore outputs raw logits, the loss is constructed with `from_logits=True`; a short numeric sketch of this follows below. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
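###Markdown
Here is the short numeric sketch promised above (illustrative values): the model emits a raw logit, and `from_logits=True` makes the loss apply the sigmoid internally, so it matches the plain loss computed on the sigmoid probability.
###Code
logit = tf.constant([[2.0]])
prob = tf.sigmoid(logit)  # ~0.88
y_true = tf.constant([[1.0]])
bce_from_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
bce_on_probs = tf.keras.losses.BinaryCrossentropy()
# Both calls yield (approximately) the same loss value, ~0.127.
print(bce_from_logits(y_true, logit).numpy())
print(bce_on_probs(y_true, prob).numpy())
###Output
_____no_output_____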
###Markdown
Model summary

View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model

Use the model's `fit_generator` method to train the network on batches drawn from the generators; `steps_per_epoch` and `validation_steps` specify how many batches make up one pass over the training and validation sets, respectively.
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model.

Overfitting

In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% in the training process. Also, the difference in accuracy between training and validation accuracy is noticeable—a sign of *overfitting*.

When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.

There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to the model.

Data augmentation

Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation generates more training data from existing samples by applying random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This helps expose the model to more aspects of the data so it generalizes better.

Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process.

Augment and visualize data

Begin by applying a random horizontal flip augmentation to the dataset and see what individual images look like after the transformation.

Apply horizontal flip

Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and apply 45 degrees of rotation randomly to the training examples.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
# zoom_range is a fraction from 0 to 1; 0.5 zooms images by up to 50%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together

Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout

Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization* that randomly disables a fraction of a layer's output units during training, which prevents the network from relying too heavily on any particular activation and so reduces overfitting on small training sets. Dropout is the regularization technique used in this tutorial.

When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from the applied layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, meaning that 10%, 20%, or 40% of the output units are dropped at random from the applied layer. For example, when applying 0.1 dropout to a certain layer, 10% of its output units are randomly zeroed in each training step.

Create a network architecture with this new dropout feature and apply it to different convolutional and fully-connected layers.

Creating a new network with dropout

Here, you apply dropout to the first and last max pooling layers. Applying dropout will randomly set 20% of the neurons to zero during training. This helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model

After introducing dropout into the network, compile the model and view the layer summary.
###Code
model_new.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model

After successfully introducing data augmentation for the training examples and adding dropout to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model

Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up after training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and load data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.* _Overfitting_ —How to identify and prevent it.* _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and directory structure, NumPy is used to convert python list to numpy array and to perform required matrix operations and `matplotlib.pyplot` to plot the graph and display images in the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import Tensorflow and the Keras classes needed to construct our model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of Dogs vs Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:cats_and_dogs_filtered|__ train |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....] |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]|__ validation |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....] |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...] After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data

Let's look at how many cat and dog images are in the training and validation directories:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation

Format the images into appropriately pre-processed floating point tensors before feeding them to the network:

1. Read images from the disk.
2. Decode the contents of these images and convert them into the proper grid format according to their RGB content.
3. Convert them into floating point tensors.
4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.

Fortunately, all of these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors, which is helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from disk, applies rescaling, and resizes them to the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images

Visualize the training images by extracting a batch of images from the training generator (128 images here, matching `batch_size`), then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset as a tuple `(x_train, y_train)`, where `x_train` holds the training features and `y_train` their labels. Discard the labels to visualize only the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model

The model consists of three convolution blocks, each followed by a max pooling layer. On top of them sits a fully connected layer with 512 units and a `relu` activation. The final layer uses a `sigmoid` activation to output a class probability for binary classification.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the model

For this tutorial, choose the *Adam* optimizer and the *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
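###Markdown
The strings `'adam'` and `'binary_crossentropy'` are shorthand for Keras objects; an equivalent, more explicit form of the same compile call (a sketch using the default arguments) is:
###Code
# Equivalent to the compile call above, spelled out with the underlying
# optimizer and loss objects.
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])
###Output
_____no_output_____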
###Markdown
Model summary

View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model

Use the model's `fit_generator` method to train the network on the batches produced by the `ImageDataGenerator`.
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
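###Markdown
Note: `fit_generator` is deprecated in recent TensorFlow 2 releases, where `Model.fit` accepts generators directly. A minimal sketch of the equivalent call (assuming the same objects as above):
###Code
# Equivalent training call on newer TF versions; fit() consumes the
# generator just like fit_generator() did.
history = model.fit(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)
###Output
_____no_output_____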
###Markdown
Visualize training results

Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set.

Let's look at what went wrong and try to increase the overall performance of the model.

Overfitting

In the plots above, the training accuracy increases linearly over time, whereas validation accuracy stalls around 70% during training. Also, the difference between training and validation accuracy is noticeable, a sign of *overfitting*.

When there is a small number of training examples, the model sometimes learns noise or unwanted details from the training examples, to the extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset.

There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to the model.

Data augmentation

Overfitting generally occurs when there is a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation generates more training data from existing samples by applying random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better.

Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process.

Augment and visualize data

Begin by applying random horizontal flip augmentation to the dataset and see what individual images look like after the transformation.

Apply horizontal flip

Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image

Let's take a look at a different augmentation, rotation, and randomly rotate the training examples by up to 45 degrees.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation

Apply a zoom augmentation to the dataset to randomly zoom images by up to 50%.
###Code
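# zoom_range=0.5 picks a random zoom factor in [0.5, 1.5], i.e. up to 50% in or out.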
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together

Apply all the previous augmentations at once. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image can look five different times when these augmentations are applied to the dataset at random.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create a validation data generator

Generally, apply data augmentation only to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout

Another technique to reduce overfitting is to introduce *dropout* to the network; it is the regularization technique used in this tutorial. When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, which means dropping out 10%, 20%, or 40% of the output units at random from the applied layer. For example, applying 0.1 dropout to a layer randomly zeroes 10% of its output units on each training step. The short sketch below shows the effect in isolation.
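###Code
# A minimal demonstration of dropout in isolation (illustrative only, not
# part of the tutorial's pipeline). In training mode, roughly 20% of the
# units are zeroed and the remaining ones are scaled by 1/(1 - 0.2), so
# the expected sum of the outputs is preserved.
demo_layer = Dropout(0.2)
demo_data = np.ones((1, 10), dtype=np.float32)
print(demo_layer(demo_data, training=True))
###Output
_____no_output_____
###Markdown
Creating a new network with dropout

Create a network architecture with this new dropout feature and apply it to different convolution and fully-connected layers. Here, you apply dropout after the first and last max pooling layers. It will randomly set 20% of the output units to zero during each training step, which helps to avoid overfitting on the training dataset.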
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the model

After introducing dropout to the network, compile the model and view the layer summary.
###Code
model_new.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model

After introducing data augmentation to the training examples and adding dropout to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model

Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should improve further when training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification

This tutorial shows how to classify images of flowers. It creates an image classifier using a `tf.keras.Sequential` model, and loads data using `tf.keras.utils.image_dataset_from_directory`. You will gain practical experience with the following concepts:

* Efficiently loading a dataset off disk.
* Identifying overfitting and applying techniques to mitigate it, including data augmentation and dropout.

This tutorial follows a basic machine learning workflow:

1. Examine and understand data
2. Build an input pipeline
3. Build the model
4. Train the model
5. Test the model
6. Improve the model and repeat the process

Import TensorFlow and other libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
###Output
_____no_output_____
###Markdown
Download and explore the dataset

This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains five sub-directories, one per class:

```
flower_photo/
  daisy/
  dandelion/
  roses/
  sunflowers/
  tulips/
```
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
Here are some roses:
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
###Output
_____no_output_____
###Markdown
And some tulips:
###Code
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
###Output
_____no_output_____
###Markdown
Load data using a Keras utility

Let's load these images off disk using the helpful `tf.keras.utils.image_dataset_from_directory` utility. This will take you from a directory of images on disk to a `tf.data.Dataset` in just a couple lines of code. If you like, you can also write your own data loading code from scratch by visiting the [Load and preprocess images](../load_data/images.ipynb) tutorial.

Create a dataset

Define some parameters for the loader:
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
It's good practice to use a validation split when developing your model. Let's use 80% of the images for training, and 20% for validation.
###Code
train_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
You can find the class names in the `class_names` attribute on these datasets. These correspond to the directory names in alphabetical order.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
Visualize the data

Here are the first nine images from the training dataset:
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will train a model using these datasets by passing them to `Model.fit` in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to the RGB color channels). The `label_batch` is a tensor of the shape `(32,)`; these are the corresponding labels for the 32 images. You can call `.numpy()` on the `image_batch` and `labels_batch` tensors to convert them to a `numpy.ndarray`; a quick check follows below.
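###Code
# A minimal sketch (it assumes `image_batch` and `labels_batch` from the
# loop above are still in scope): `.numpy()` turns the eager tensors into
# plain NumPy arrays.
images_np = image_batch.numpy()
labels_np = labels_batch.numpy()
print(type(images_np).__name__, images_np.shape, labels_np.shape)
###Output
_____no_output_____
###Markdown
Configure the dataset for performance

Let's make sure to use buffered prefetching so you can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data:

- `Dataset.cache` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.
- `Dataset.prefetch` overlaps data preprocessing and model execution while training.

Interested readers can learn more about both methods, as well as how to cache data to disk, in the *Prefetching* section of the [Better performance with the tf.data API](../../guide/data_performance.ipynb) guide.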
###Code
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Standardize the data

The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small.

Here, you will standardize values to be in the `[0, 1]` range by using `tf.keras.layers.Rescaling`:
###Code
normalization_layer = layers.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
There are two ways to use this layer. You can apply it to the dataset by calling map:
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixel values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
Or, you can include the layer inside your model definition, which can simplify deployment. Let's use the second approach here.

Note: You previously resized images using the `image_size` argument of `tf.keras.utils.image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the `tf.keras.layers.Resizing` layer. A minimal sketch of that alternative (for illustration only; it is not used in the rest of this tutorial):
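###Code
# Hypothetical preprocessing block (not used below): resize and rescale
# inside the model so that an exported model accepts raw images directly.
resize_and_rescale = Sequential([
    layers.Resizing(img_height, img_width),
    layers.Rescaling(1./255),
])
###Output
_____no_output_____
###Markdown
Create the model

The [Sequential](../../guide/keras/sequential_model.ipynb) model consists of three convolution blocks (`tf.keras.layers.Conv2D`), each followed by a max pooling layer (`tf.keras.layers.MaxPooling2D`). On top of them sits a fully-connected layer (`tf.keras.layers.Dense`) with 128 units activated by a ReLU activation function (`'relu'`). This model has not been tuned for high accuracy; the goal of this tutorial is to show a standard approach.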
###Code
num_classes = 5
model = Sequential([
layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile the model

For this tutorial, choose the `tf.keras.optimizers.Adam` optimizer and `tf.keras.losses.SparseCategoricalCrossentropy` loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument to `Model.compile`.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summary

View all the layers of the network using the model's `Model.summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model
###Code
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results

Create plots of loss and accuracy on the training and validation sets:
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
The plots show that training accuracy and validation accuracy are off by a large margin, and the model has achieved only around 60% accuracy on the validation set.

Let's inspect what went wrong and try to increase the overall performance of the model.

Overfitting

In the plots above, the training accuracy increases linearly over time, whereas validation accuracy stalls around 60% during training. Also, the difference between training and validation accuracy is noticeable, a sign of [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).

When there is a small number of training examples, the model sometimes learns noise or unwanted details from the training examples, to the extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset.

There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to your model.

Data augmentation

Overfitting generally occurs when there is a small number of training examples. [Data augmentation](./data_augmentation.ipynb) takes the approach of generating additional training data from your existing examples by augmenting them with random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better.

You will implement data augmentation using the following Keras preprocessing layers: `tf.keras.layers.RandomFlip`, `tf.keras.layers.RandomRotation`, and `tf.keras.layers.RandomZoom`. These can be included inside your model like other layers, and run on the GPU.
###Code
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.RandomRotation(0.1),
layers.RandomZoom(0.1),
]
)
###Output
_____no_output_____
###Markdown
Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
###Code
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will use data augmentation to train a model in a moment.

Dropout

Another technique to reduce overfitting is to introduce [dropout](https://developers.google.com/machine-learning/glossary#dropout_regularization) regularization to the network. When you apply dropout to a layer, it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, which means dropping out 10%, 20%, or 40% of the output units at random from the applied layer.

Let's create a new neural network with `tf.keras.layers.Dropout` before training it using the augmented images:
###Code
model = Sequential([
data_augmentation,
layers.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile and train the model
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results

After applying data augmentation and `tf.keras.layers.Dropout`, there is less overfitting than before, and training and validation accuracy are more closely aligned:
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Predict on new data

Finally, let's use our model to classify an image that wasn't included in the training or validation sets. Note: Data augmentation and dropout layers are inactive at inference time.
###Code
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = tf.keras.utils.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = tf.keras.utils.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
###Output
_____no_output_____
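###Markdown
The note above says augmentation and dropout are inactive at inference; calling the model directly with `training=False` makes that explicit. A minimal sketch reusing `img_array` from the cell above:
###Code
# Equivalent direct call: the augmentation and dropout layers are bypassed
# because training=False.
logits = model(img_array, training=False)
print(tf.nn.softmax(logits[0]))
###Output
_____no_output_____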
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification

This tutorial shows how to classify images of cats and dogs. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:

* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.
* _Overfitting_: how to identify and prevent it.
* _Data augmentation_ and _dropout_: key techniques to fight overfitting in computer vision tasks, incorporated into the data pipeline and the image classifier model.

This tutorial follows a basic machine learning workflow:

1. Examine and understand data
2. Build an input pipeline
3. Build the model
4. Train the model
5. Test the model
6. Improve the model and repeat the process

Import packages

Let's start by importing the required packages: the `os` package to read files and the directory structure, NumPy to convert Python lists to NumPy arrays and perform the required matrix operations, and `matplotlib.pyplot` to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import TensorFlow and the Keras classes needed to construct the model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data

Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs. Cats dataset from Kaggle. Download the archive version of the dataset with `tf.keras.utils.get_file`, which caches it locally (under `~/.keras/datasets` by default) and extracts it.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:

```
cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ...]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ...]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
```

After extracting its contents, assign variables with the proper file path for the training and validation sets.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data

Let's look at how many cat and dog images are in the training and validation directories:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation

Format the images into appropriately pre-processed floating point tensors before feeding them to the network:

1. Read images from the disk.
2. Decode the contents of these images and convert them into the proper grid format according to their RGB content.
3. Convert them into floating point tensors.
4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.

Fortunately, all of these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors, which is helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from disk, applies rescaling, and resizes them to the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images

Visualize the training images by extracting a batch of images from the training generator (128 images here, matching `batch_size`), then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset as a tuple `(x_train, y_train)`, where `x_train` holds the training features and `y_train` their labels. Discard the labels to visualize only the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model

The model consists of three convolution blocks, each followed by a max pooling layer. On top of them sits a fully connected layer with 512 units and a `relu` activation. The final `Dense(1)` layer has no activation; it outputs a raw logit, which pairs with the `from_logits=True` loss used below.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model

For this tutorial, choose the *Adam* optimizer and the *binary cross entropy* loss function (computed from logits, to match the model's raw-logit output). To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summary

View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model

Use the model's `fit_generator` method to train the network on the batches produced by the `ImageDataGenerator` (on newer TensorFlow versions, `Model.fit` accepts generators directly).
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
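###Markdown
Since this model outputs raw logits, converting a few predictions to probabilities takes one extra step. A minimal sketch (it assumes `sample_training_images` from earlier in the notebook is still in scope):
###Code
# Hypothetical check (not part of the original flow): run a few images
# through the trained model and squash the logits with a sigmoid.
logits = model.predict(sample_training_images[:5])
probs = tf.nn.sigmoid(logits)
print(probs.numpy().ravel())  # values in (0, 1); > 0.5 leans towards class 1
###Output
_____no_output_____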
###Markdown
Visualize training results

Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set.

Let's look at what went wrong and try to increase the overall performance of the model.

Overfitting

In the plots above, the training accuracy increases linearly over time, whereas validation accuracy stalls around 70% during training. Also, the difference between training and validation accuracy is noticeable, a sign of *overfitting*.

When there is a small number of training examples, the model sometimes learns noise or unwanted details from the training examples, to the extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset.

There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to the model.

Data augmentation

Overfitting generally occurs when there is a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation generates more training data from existing samples by applying random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better.

Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process.

Augment and visualize data

Begin by applying random horizontal flip augmentation to the dataset and see what individual images look like after the transformation.

Apply horizontal flip

Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image

Let's take a look at a different augmentation, rotation, and randomly rotate the training examples by up to 45 degrees.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation

Apply a zoom augmentation to the dataset to randomly zoom images by up to 50%.
###Code
# zoom_range=0.5 picks a random zoom factor in [0.5, 1.5], i.e. up to 50% in or out.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together

Apply all the previous augmentations at once. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image can look five different times when these augmentations are applied to the dataset at random.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create a validation data generator

Generally, apply data augmentation only to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout

Another technique to reduce overfitting is to introduce *dropout* to the network; it is the regularization technique used in this tutorial. When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, which means dropping out 10%, 20%, or 40% of the output units at random from the applied layer. For example, applying 0.1 dropout to a layer randomly zeroes 10% of its output units on each training step.

Creating a new network with dropout

Create a network architecture with this dropout feature and apply it to different convolution and fully-connected layers. Here, you apply dropout after the first and last max pooling layers. It will randomly set 20% of the output units to zero during each training step, which helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
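###Markdown
To see concretely what dropout does, here is a small sanity check that is not part of the original tutorial: applying a `Dropout(0.2)` layer to a tensor of ones in training mode zeroes roughly 20% of the values and rescales the survivors by 1/(1 - 0.2) = 1.25, so the expected sum stays the same.
###Code
# Hypothetical illustration (not from the original tutorial): Dropout in training mode.
demo = tf.ones((1, 10))
dropped = tf.keras.layers.Dropout(0.2)(demo, training=True)
print(dropped.numpy())  # roughly two zeros; the surviving entries equal 1.25
###Output
_____no_output_____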
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view a summary of its layers.
###Code
model_new.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up further if you train the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.* _Overfitting_ —how to identify and prevent it.* _Data augmentation_ and _dropout_ —key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import TensorFlow and the Keras classes needed to construct our model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs. Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:

cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]

After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network:1. Read images from the disk.2. Decode the contents of these images and convert them into a proper grid format according to their RGB content.3. Convert them into floating point tensors.4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
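###Markdown
To make the `rescale=1./255` step concrete, here is a quick hypothetical check (the pixel values are illustrative) showing how 8-bit intensities map into the `[0, 1]` range:
###Code
# Hypothetical check: rescaling maps 8-bit pixel values into [0, 1].
pixels = np.array([0, 127, 255], dtype=np.float32)
print(pixels * (1. / 255))  # [0.0, ~0.498, 1.0]
###Output
_____no_output_____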
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from disk, applies rescaling, and resizes the images into the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images Visualize the training images by extracting a batch of images from the training generator—which contains 128 images in this example—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset, in the form `(x_train, y_train)`, where `x_train` contains the training features and `y_train` their labels. Discard the labels to only visualize the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
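###Markdown
To confirm the structure of a batch described above (a quick hypothetical check reusing the generator already defined), inspect the shapes that `next` returns:
###Code
# Each batch from the generator is a (features, labels) pair.
images, labels = next(train_data_gen)
print(images.shape)  # (128, 150, 150, 3): 128 RGB images of 150x150 pixels
print(labels.shape)  # (128,): one binary label per image
###Output
_____no_output_____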
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function. The model outputs a single class probability for the binary classification via the `sigmoid` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the model For this tutorial, choose the *Adam* optimizer and the *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
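###Markdown
Because the final layer uses a `sigmoid` activation, even the untrained model already outputs a probability between 0 and 1. A quick hypothetical check on a dummy input:
###Code
# Hypothetical check: the sigmoid output is a probability in (0, 1).
dummy = np.zeros((1, IMG_HEIGHT, IMG_WIDTH, 3), dtype=np.float32)
print(model.predict(dummy))  # a single value between 0 and 1
###Output
_____no_output_____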
###Markdown
Model summary View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model Use the `fit_generator` method of the model to train the network.
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% during training. Also, the difference between training and validation accuracy is noticeable—a sign of *overfitting*. When there is a small number of training examples, the model sometimes learns noise or unwanted details from the training examples—to an extent that negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset. There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to your model. Data augmentation Overfitting generally occurs when there is a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation generates more training data from existing samples by applying random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This exposes the model to more aspects of the data and helps it generalize better. Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process. Augment and visualize data Begin by applying a random horizontal flip augmentation to the dataset and see what individual images look like after the transformation. Apply horizontal flip Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and randomly rotate the training images by up to 45 degrees.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to randomly zoom images by up to 50%.
###Code
# zoom_range is a fraction from 0 to 1, where 0.5 allows zooming by up to 50%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image looks five different times when these augmentations are randomly applied to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization*: by randomly removing a fraction of a layer's outputs during training, it prevents units from co-adapting and helps the network avoid overfitting on small training sets. Dropout is one of the regularization techniques used in this tutorial. When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, which means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer. For example, applying 0.1 dropout to a layer randomly zeroes 10% of its output units at each training step. Create a network architecture with this new dropout feature and apply it to different convolution and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout to the first and last max pooling layers. Applying dropout will randomly set 20% of the neurons to zero during each training step, which helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view a summary of its layers.
###Code
model_new.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up further if you train the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.* _Overfitting_ —how to identify and prevent it.* _Data augmentation_ and _dropout_ —key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data. Import TensorFlow and the Keras classes needed to construct our model.
###Code
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs. Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:

cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]

After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network:1. Read images from the disk.2. Decode the contents of these images and convert them into a proper grid format according to their RGB content.3. Convert them into floating point tensors.4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from disk, applies rescaling, and resizes the images into the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images Visualize the training images by extracting a batch of images from the training generator—which contains 128 images in this example—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset, in the form `(x_train, y_train)`, where `x_train` contains the training features and `y_train` their labels. Discard the labels to only visualize the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model For this tutorial, choose the *Adam* optimizer and the *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
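###Markdown
Note that the final `Dense(1)` layer in this model has no activation, so it outputs raw logits; `from_logits=True` tells the loss to apply the sigmoid internally. A quick hypothetical check of that conversion:
###Code
# Hypothetical check: the loss converts logits to probabilities via a sigmoid.
logit = tf.constant([[0.0]])      # a raw model output (logit)
print(tf.sigmoid(logit).numpy())  # 0.5: a logit of 0 means "undecided"
###Output
_____no_output_____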
###Markdown
Model summary View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
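###Markdown
Before training, it can help to sanity-check how many optimizer steps each epoch will take; the `steps_per_epoch` argument used below is simply the number of full batches per epoch (a quick check using the counts computed earlier):
###Code
# Number of full batches the generators yield per epoch.
print(total_train // batch_size)  # training steps per epoch
print(total_val // batch_size)    # validation steps per epoch
###Output
_____no_output_____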
###Markdown
Train the model
###Code
history = model.fit(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% during training. Also, the difference between training and validation accuracy is noticeable—a sign of *overfitting*. When there is a small number of training examples, the model sometimes learns noise or unwanted details from the training examples—to an extent that negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset. There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to your model. Data augmentation Overfitting generally occurs when there is a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation generates more training data from existing samples by applying random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This exposes the model to more aspects of the data and helps it generalize better. Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process. Augment and visualize data Begin by applying a random horizontal flip augmentation to the dataset and see what individual images look like after the transformation. Apply horizontal flip Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and randomly rotate the training images by up to 45 degrees.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to randomly zoom images by up to 50%.
###Code
# zoom_range is a fraction from 0 to 1, where 0.5 allows zooming by up to 50%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image looks five different times when these augmentations are randomly applied to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization*: by randomly removing a fraction of a layer's outputs during training, it prevents units from co-adapting and helps the network avoid overfitting on small training sets. Dropout is one of the regularization techniques used in this tutorial. When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, which means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer. For example, applying 0.1 dropout to a layer randomly zeroes 10% of its output units at each training step. Create a network architecture with this new dropout feature and apply it to different convolution and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout to the first and last max pooling layers. Applying dropout will randomly set 20% of the neurons to zero during each training step, which helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view a summary of its layers.
###Code
model_new.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up further if you train the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify images of flowers. It creates an image classifier using a `keras.Sequential` model, and loads data using `preprocessing.image_dataset_from_directory`. You will gain practical experience with the following concepts:* Efficiently loading a dataset off disk.* Identifying overfitting and applying techniques to mitigate it, including data augmentation and Dropout.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process
###Code
!pip install tf-nightly
###Output
_____no_output_____
###Markdown
Import TensorFlow and other libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
###Output
_____no_output_____
###Markdown
Download and explore the dataset This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains 5 sub-directories, one per class:

```
flower_photo/
  daisy/
  dandelion/
  roses/
  sunflowers/
  tulips/
```
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
Here are some roses:
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
###Output
_____no_output_____
###Markdown
And some tulips:
###Code
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
###Output
_____no_output_____
###Markdown
Load using keras.preprocessing Let's load these images off disk using the helpful [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) utility. This will take you from a directory of images on disk to a `tf.data.Dataset` in just a couple of lines of code. If you like, you can also write your own data loading code from scratch by visiting the [load images](https://www.tensorflow.org/tutorials/load_data/images) tutorial. Create a dataset Define some parameters for the loader:
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
It's good practice to use a validation split when developing your model. We will use 80% of the images for training, and 20% for validation.
###Code
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
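###Markdown
As a quick check on the 80/20 split (a hypothetical addition, reusing the datasets just created), you can inspect how many batches each dataset yields:
###Code
# Cardinality reports the number of batches in each (batched) dataset.
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
###Output
_____no_output_____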
###Markdown
You can find the class names in the `class_names` attribute on these datasets. These correspond to the directory names in alphabetical order.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
Visualize the data Here are the first 9 images from the training dataset.
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will train a model using these datasets by passing them to `model.fit` in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to the RGB color channels). The `label_batch` is a tensor of the shape `(32,)`; these are the corresponding labels for the 32 images. You can call `.numpy()` on the `image_batch` and `labels_batch` tensors to convert them to a `numpy.ndarray`. Configure the dataset for performance Let's make sure to use buffered prefetching so we can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data.`Dataset.cache()` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.`Dataset.prefetch()` overlaps data preprocessing and model execution while training. Interested readers can learn more about both methods, as well as how to cache data to disk, in the [data performance guide](https://www.tensorflow.org/guide/data_performance#prefetching).
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
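###Markdown
As mentioned above, you can call `.numpy()` to turn a batch of tensors into plain NumPy arrays; a quick hypothetical check:
###Code
# Convert one batch of tensors to numpy.ndarray objects.
image_batch, labels_batch = next(iter(train_ds))
print(image_batch.numpy().shape, labels_batch.numpy().shape)  # (32, 180, 180, 3) (32,)
###Output
_____no_output_____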
###Markdown
Standardize the data The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, we will standardize values to be in the `[0, 1]` range by using a Rescaling layer.
###Code
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
Note: The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change. There are two ways to use this layer. You can apply it to the dataset by calling map:
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixel values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
Or, you can include the layer inside your model definition, which can simplify deployment. We will use the second approach here. Note: we previously resized images using the `image_size` argument of `image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer. Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 128 units on top of it that is activated by a `relu` activation function. This model has not been tuned for high accuracy; the goal of this tutorial is to show a standard approach.
###Code
num_classes = 5
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile the model For this tutorial, choose the `optimizers.Adam` optimizer and `losses.SparseCategoricalCrossentropy` loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summary View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model
###Code
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results Create plots of loss and accuracy on the training and validation sets.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around 60% accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 60% during training. Also, the difference between training and validation accuracy is noticeable—a sign of [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit). When there is a small number of training examples, the model sometimes learns noise or unwanted details from the training examples—to an extent that negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset. There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *Dropout* to your model. Data augmentation Overfitting generally occurs when there is a small number of training examples. [Data augmentation](https://www.tensorflow.org/tutorials/images/data_augmentation) takes the approach of generating additional training data from your existing examples by augmenting them using random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better. We will implement data augmentation using the experimental [Keras Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/?version=nightly). These can be included inside your model like other layers, and run on the GPU.
###Code
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.experimental.preprocessing.RandomRotation(0.1),
layers.experimental.preprocessing.RandomZoom(0.1),
]
)
###Output
_____no_output_____
###Markdown
Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
###Code
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
We will use data augmentation to train a model in a moment. Dropout Another technique to reduce overfitting is to introduce [Dropout](https://developers.google.com/machine-learning/glossary#dropout_regularization) to the network, a form of *regularization*. When you apply Dropout to a layer, it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, which means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer. Let's create a new neural network using `layers.Dropout`, then train it using augmented images.
###Code
model = Sequential([
data_augmentation,
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile and train the model
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results After applying data augmentation and Dropout, there is less overfitting than before, and training and validation accuracy are more closely aligned.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Predict on new data Finally, let's use our model to classify an image that wasn't included in the training or validation sets. Note: Data augmentation and Dropout layers are inactive at inference time.
###Code
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = keras.preprocessing.image.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
###Output
_____no_output_____
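###Markdown
Since `tf.nn.softmax` normalizes the logits, the five class scores form a probability distribution; a quick hypothetical check that they sum to 1:
###Code
# Softmax outputs sum to 1 across the classes.
print(float(tf.reduce_sum(score)))  # ~1.0
###Output
_____no_output_____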
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and apply 45 degrees of rotation randomly to the training examples.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
# zoom_range from 0 - 1 where 1 = 100%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5) #
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you applied rescale, 45 degree rotation, width shift, height shift, horizontal flip and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization* that forces the weights in the network to take only small values, which makes the distribution of weight values more regular and the network can reduce overfitting on small training examples. Dropout is one of the regularization technique used in this tutorialWhen you apply dropout to a layer it randomly drops out (set to zero) number of output units from the applied layer during the training process. Dropout takes a fractional number as its input value, in the form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.When appling 0.1 dropout to a certain layer, it randomly kills 10% of the output units in each training epoch.Create a network architecture with this new dropout feature and apply it to different convolutions and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout to first and last max pool layers. Applying dropout will randomly set 20% of the neurons to zero during each training epoch. This helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view the layers summary.
###Code
model_new.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
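###Markdown
As a quick sanity check on the summary (a hand calculation, not from the tutorial), the first convolution's parameter count can be derived directly: 16 filters, each a 3x3 kernel over 3 input channels, plus one bias per filter.
###Code
# (kernel_h * kernel_w * in_channels + 1 bias) * n_filters = 448 parameters
print((3 * 3 * 3 + 1) * 16)
###Output
_____no_output_____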
###Markdown
Train the model After introducing data augmentation for the training examples and adding dropout to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
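###Markdown
A note on the arguments above: `steps_per_epoch` and `validation_steps` are chosen so that one epoch draws each image roughly once. Assuming the filtered dataset's 2,000 training and 1,000 validation images, the integer division works out as follows.
###Code
# 2000 // 128 = 15 training batches and 1000 // 128 = 7 validation batches per epoch
print(total_train // batch_size, total_val // batch_size)
###Output
_____no_output_____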
###Markdown
Visualize the model Visualize the new model after training: you can see that there is significantly less overfitting than before. Accuracy should improve further if you train the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify images of flowers. It creates an image classifier using a `keras.Sequential` model, and loads data using `preprocessing.image_dataset_from_directory`. You will gain practical experience with the following concepts:
* Efficiently loading a dataset off disk.
* Identifying overfitting and applying techniques to mitigate it, including data augmentation and Dropout.
This tutorial follows a basic machine learning workflow:
1. Examine and understand data
2. Build an input pipeline
3. Build the model
4. Train the model
5. Test the model
6. Improve the model and repeat the process
###Code
!pip install tf-nightly
###Output
_____no_output_____
###Markdown
Import TensorFlow and other libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
###Output
_____no_output_____
###Markdown
Download and explore the dataset This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains 5 sub-directories, one per class:
```
flower_photo/
  daisy/
  dandelion/
  roses/
  sunflowers/
  tulips/
```
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
Here are some roses:
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
###Output
_____no_output_____
###Markdown
And some tulips:
###Code
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
###Output
_____no_output_____
###Markdown
Load using keras.preprocessingLet's load these images off disk using the helpful [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) utility. This will take you from a directory of images on disk to a `tf.data.Dataset` in just a couple lines of code. If you like, you can also write your own data loading code from scratch by visiting the [load images](https://www.tensorflow.org/tutorials/load_data/images) tutorial. Create a dataset Define some parameters for the loader:
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
It's good practice to use a validation split when developing your model. We will use 80% of the images for training, and 20% for validation.
###Code
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
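###Markdown
As an optional check (assuming the 3,670-image dataset above), an 80/20 split should yield 2,936 training and 734 validation images; `cardinality` reports the resulting number of batches.
###Code
# Number of batches per epoch in each split (2936 / 32 -> 92, 734 / 32 -> 23)
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
###Output
_____no_output_____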
###Markdown
You can find the class names in the `class_names` attribute on these datasets. These correspond to the directory names in alphabetical order.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
Visualize the data Here are the first 9 images from the training dataset.
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will train a model using these datasets by passing them to `model.fit` in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to the RGB color channels). The `label_batch` is a tensor of the shape `(32,)`; these are the corresponding labels for the 32 images. You can call `.numpy()` on the `image_batch` and `labels_batch` tensors to convert them to a `numpy.ndarray`. Configure the dataset for performance Let's make sure to use buffered prefetching so we can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data. `Dataset.cache()` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache. `Dataset.prefetch()` overlaps data preprocessing and model execution while training. Interested readers can learn more about both methods, as well as how to cache data to disk, in the [data performance guide](https://www.tensorflow.org/guide/data_performance#prefetching).
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Standardize the data The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, we will standardize values to be in the `[0, 1]` range by using a Rescaling layer.
###Code
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
Note: The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change. There are two ways to use this layer. You can apply it to the dataset by calling map:
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
Or, you can include the layer inside your model definition, which can simplify deployment. We will use the second approach here. Note: we previously resized images using the `image_size` argument of `image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer. Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 128 units on top of it that is activated by a `relu` activation function. This model has not been tuned for high accuracy; the goal of this tutorial is to show a standard approach.
###Code
num_classes = 5
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile the model For this tutorial, choose the `optimizers.Adam` optimizer and `losses.SparseCategoricalCrossentropy` loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summary View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model
###Code
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results Create plots of loss and accuracy on the training and validation sets.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around 60% accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 60% in the training process. Also, the difference in accuracy between training and validation accuracy is noticeable—a sign of [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit). When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset. There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *Dropout* to your model. Data augmentation Overfitting generally occurs when there are a small number of training examples. [Data augmentation](https://www.tensorflow.org/tutorials/images/data_augmentation) takes the approach of generating additional training data from your existing examples by augmenting them using random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better. We will implement data augmentation using experimental [Keras Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/?version=nightly). These can be included inside your model like other layers, and run on the GPU.
###Code
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.experimental.preprocessing.RandomRotation(0.1),
layers.experimental.preprocessing.RandomZoom(0.1),
]
)
###Output
_____no_output_____
###Markdown
Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
###Code
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
We will use data augmentation to train a model in a moment. Dropout Another technique to reduce overfitting is to introduce [Dropout](https://developers.google.com/machine-learning/glossary#dropout_regularization) to the network, a form of *regularization*. When you apply Dropout to a layer, it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, which means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer. Let's create a new neural network using `layers.Dropout`, then train it using augmented images.
###Code
model = Sequential([
data_augmentation,
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile and train the model
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results After applying data augmentation and Dropout, there is less overfitting than before, and training and validation accuracy are more closely aligned.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Predict on new data Finally, let's use our model to classify an image that wasn't included in the training or validation sets. Note: Data augmentation and Dropout layers are inactive at inference time.
###Code
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = keras.preprocessing.image.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:
* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.
* _Overfitting_: how to identify and prevent it.
* _Data augmentation_ and _dropout_: key techniques to fight overfitting in computer vision tasks, incorporated into the data pipeline and image classifier model.
This tutorial follows a basic machine learning workflow:
1. Examine and understand data
2. Build an input pipeline
3. Build the model
4. Train the model
5. Test the model
6. Improve the model and repeat the process
Import packages Let's start by importing the required packages. The `os` package is used to read files and directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import TensorFlow and the Keras classes needed to construct our model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:
```
cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ...]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ...]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
```
After extracting its contents, assign variables with the proper file paths for the training and validation sets.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cat and dog images are in the training and validation directories:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network:
1. Read images from the disk.
2. Decode the contents of these images and convert them into a proper grid format as per their RGB content.
3. Convert them into floating point tensors.
4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.
Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from the disk, applies rescaling, and resizes them to the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
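###Markdown
As a quick sanity check (an added aside), you can inspect the label mapping that `flow_from_directory` inferred from the sub-directory names; with `class_mode='binary'` each image gets a single 0/1 label.
###Code
# class_indices maps sub-directory names to the integer labels used in the batches.
print(train_data_gen.class_indices)  # expected: {'cats': 0, 'dogs': 1}
###Output
_____no_output_____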
###Markdown
Visualize training images Visualize the training images by extracting a batch of images from the training generator—which is 128 images in this example, matching `batch_size`—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset. The return value of the `next` function is in the form of `(x_train, y_train)`, where `x_train` is the training features and `y_train` is the corresponding labels. Discard the labels to visualize only the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
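###Markdown
Another added quick check: the shapes of the arrays returned by `next` make the batch layout explicit.
###Code
# One batch of images: (batch_size, IMG_HEIGHT, IMG_WIDTH, channels).
print(sample_training_images.shape)  # expected: (128, 150, 150, 3)
###Output
_____no_output_____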
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function. The model outputs a class probability for binary classification via the `sigmoid` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the *ADAM* optimizer and *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summaryView all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model Use the `fit_generator` method of the `Sequential` model to train the network on batches produced by the generators.
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
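###Markdown
An added note on the step arithmetic above: `steps_per_epoch=total_train // batch_size` uses integer division, so with the filtered dataset's 2,000 training images and a batch size of 128, each epoch draws 15 batches and a few images are left over.
###Code
# Added illustrative check of the steps-per-epoch arithmetic.
print(total_train // batch_size)                             # 2000 // 128 == 15 batches per epoch
print(total_train - (total_train // batch_size) * batch_size)  # 80 images left over
###Output
_____no_output_____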
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% during training. Also, the difference between training and validation accuracy is noticeable—a sign of *overfitting*. When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset. There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to our model. Data augmentation Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples with random transformations that yield believable-looking images. The goal is that the model will never see the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better. Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process. Augment and visualize data Begin by applying random horizontal flip augmentation to the dataset and see how individual images look after the transformation. Apply horizontal flip Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
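# Indexing train_data_gen[0] generates the first (images, labels) batch anew each
# time, so each loop iteration draws a fresh random augmentation of the same image;
# [0][0] then selects the image array and the first image in that batch.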
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and randomly rotate the training examples by up to 45 degrees.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentations to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization*: randomly selected output units of a layer are set to zero during training, so the network cannot rely too heavily on any single unit and overfits less on small training sets. Dropout is one of the regularization techniques used in this tutorial. When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4; this means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer. When applying 0.1 dropout to a certain layer, it randomly drops 10% of the output units in each training step. Create a network architecture with this new dropout feature and apply it to different convolution and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout after the first and last max pool layers, matching the `Dropout(0.2)` layers in the code below: 20% of those layers' output units are randomly set to zero during each training step.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
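###Markdown
To see concretely what a dropout fraction means, here is a small added sketch (not from the original tutorial) that applies a `Dropout` layer in training mode and measures how many units were zeroed.
###Code
# Added illustrative sketch: 20% dropout zeroes roughly 20% of the units.
drop = Dropout(0.2)
x = tf.ones((1, 1000))
y = drop(x, training=True)  # training=True activates dropout
zeroed = float(tf.reduce_mean(tf.cast(tf.equal(y, 0.0), tf.float32)))
print('fraction zeroed ~', zeroed)  # roughly 0.2; survivors are scaled by 1/(1 - 0.2)
###Output
_____no_output_____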
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view the layers summary.
###Code
model_new.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up after training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts: * Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model. * _Overfitting_ —How to identify and prevent it. * _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model. This tutorial follows a basic machine learning workflow: 1. Examine and understand data 2. Build an input pipeline 3. Build the model 4. Train the model 5. Test the model 6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and the directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import TensorFlow and the Keras classes needed to construct our model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:cats_and_dogs_filtered|__ train |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....] |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]|__ validation |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....] |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...] After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network: 1. Read the images from the disk. 2. Decode the contents of these images and convert them into a proper grid format as per their RGB content. 3. Convert them into floating point tensors. 4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values. Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from the disk, applies rescaling, and resizes the images into the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images Visualize the training images by extracting a batch of images from the training generator—which is 128 images in this example, matching `batch_size`—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset. The return value of the `next` function is in the form of `(x_train, y_train)`, where `x_train` is the training features and `y_train` is the corresponding labels. Discard the labels to visualize only the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the *ADAM* optimizer and *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
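###Markdown
Because the final `Dense(1)` layer above has no activation, the model emits raw logits, and `from_logits=True` tells the loss to apply the sigmoid internally. A small added check (not from the original tutorial) confirms the equivalence.
###Code
# Added illustrative check: from_logits=True equals sigmoid followed by the plain loss.
logit = tf.constant([[2.0]])
y_true = tf.constant([[1.0]])
loss_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)(y_true, logit)
loss_probs = tf.keras.losses.BinaryCrossentropy()(y_true, tf.sigmoid(logit))
print(float(loss_logits), float(loss_probs))  # both ~0.1269, equal up to numerical precision
###Output
_____no_output_____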
###Markdown
Model summaryView all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model Use the `fit_generator` method of the `Sequential` model to train the network on batches produced by the generators.
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% during training. Also, the difference between training and validation accuracy is noticeable—a sign of *overfitting*. When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset. There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to our model. Data augmentation Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples with random transformations that yield believable-looking images. The goal is that the model will never see the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better. Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process. Augment and visualize data Begin by applying random horizontal flip augmentation to the dataset and see how individual images look after the transformation. Apply horizontal flip Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and randomly rotate the training examples by up to 45 degrees.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
# zoom_range runs from 0 to 1, where 1 = 100%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)  # zoom up to 50%
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentations to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization*: randomly selected output units of a layer are set to zero during training, so the network cannot rely too heavily on any single unit and overfits less on small training sets. Dropout is one of the regularization techniques used in this tutorial. When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4; this means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer. When applying 0.1 dropout to a certain layer, it randomly drops 10% of the output units in each training step. Create a network architecture with this new dropout feature and apply it to different convolution and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout after the first and last max pool layers. Applying dropout will randomly set 20% of those layers' output units to zero during each training step. This helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view the layers summary.
###Code
model_new.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
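###Markdown
As an added quick check, `count_params` totals the trainable and non-trainable parameters that `summary` prints at the bottom of its table.
###Code
# Added: total parameter count across all layers of the new model.
print(model_new.count_params())
###Output
_____no_output_____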
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up after training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts: * Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model. * _Overfitting_ —How to identify and prevent it. * _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model. This tutorial follows a basic machine learning workflow: 1. Examine and understand data 2. Build an input pipeline 3. Build the model 4. Train the model 5. Test the model 6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and the directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data. Import TensorFlow and the Keras classes needed to construct our model.
###Code
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:cats_and_dogs_filtered|__ train |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....] |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]|__ validation |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....] |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...] After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network: 1. Read the images from the disk. 2. Decode the contents of these images and convert them into a proper grid format as per their RGB content. 3. Convert them into floating point tensors. 4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values. Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from the disk, applies rescaling, and resizes the images into the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images Visualize the training images by extracting a batch of images from the training generator—which is 128 images in this example, matching `batch_size`—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset. The return value of the `next` function is in the form of `(x_train, y_train)`, where `x_train` is the training features and `y_train` is the corresponding labels. Discard the labels to visualize only the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the *ADAM* optimizer and *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summaryView all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model
###Code
history = model.fit(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
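###Markdown
An added aside: `fit` returns a `History` object whose `history` attribute is a plain dict of per-epoch metric lists; that dict is what the plotting code below reads.
###Code
# Added: the metric names recorded during training.
print(sorted(history.history.keys()))  # ['accuracy', 'loss', 'val_accuracy', 'val_loss']
###Output
_____no_output_____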
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% during training. Also, the difference between training and validation accuracy is noticeable—a sign of *overfitting*. When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset. There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to our model. Data augmentation Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples with random transformations that yield believable-looking images. The goal is that the model will never see the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better. Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process. Augment and visualize data Begin by applying random horizontal flip augmentation to the dataset and see how individual images look after the transformation. Apply horizontal flip Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and randomly rotate the training examples by up to 45 degrees.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
# zoom_range runs from 0 to 1, where 1 = 100%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)  # zoom up to 50%
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentations to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization*: randomly selected output units of a layer are set to zero during training, so the network cannot rely too heavily on any single unit and overfits less on small training sets. Dropout is one of the regularization techniques used in this tutorial. When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4; this means dropping out 10%, 20%, or 40% of the output units randomly from the applied layer. When applying 0.1 dropout to a certain layer, it randomly drops 10% of the output units in each training step. Create a network architecture with this new dropout feature and apply it to different convolution and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout after the first and last max pool layers. Applying dropout will randomly set 20% of those layers' output units to zero during each training step. This helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view the layers summary.
###Code
model_new.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up after training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts: * Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model. * _Overfitting_ —How to identify and prevent it. * _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks to incorporate into the data pipeline and image classifier model. This tutorial follows a basic machine learning workflow: 1. Examine and understand data 2. Build an input pipeline 3. Build the model 4. Train the model 5. Test the model 6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and the directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import TensorFlow and the Keras classes needed to construct our model.
###Code
# Use the TF 2.x API from a TF 1.x (e.g. nightly-gpu) install via the compat.v2 module.
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
from tensorflow.compat.v2.keras.models import Sequential
from tensorflow.compat.v2.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.compat.v2.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:cats_and_dogs_filtered|__ train |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....] |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]|__ validation |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....] |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...] After extracting its contents, assign variables with the proper file path for the training and validation set.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network: 1. Read the images from the disk. 2. Decode the contents of these images and convert them into a proper grid format as per their RGB content. 3. Convert them into floating point tensors. 4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values. Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from the disk, applies rescaling, and resizes the images into the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images Visualize the training images by extracting a batch of images from the training generator—which is 128 images in this example, matching `batch_size`—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset. The return value of the `next` function is in the form of `(x_train, y_train)`, where `x_train` is the training features and `y_train` is the corresponding labels. Discard the labels to visualize only the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function. The model outputs a class probability for binary classification via the `sigmoid` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the *ADAM* optimizer and *binary cross entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summaryView all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model Use the `fit_generator` method of the `Sequential` model to train the network on batches produced by the generators.
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
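###Markdown
Since the final layer uses a `sigmoid` activation in this version, `model.predict` returns probabilities in [0, 1]. Here is a small added sketch (the 0.5 threshold is a common but assumed choice) of turning them into hard class labels.
###Code
# Added illustrative sketch: threshold sigmoid outputs at 0.5 to get class labels.
images, _ = next(val_data_gen)              # one batch of validation images
probs = model.predict(images)               # shape (batch_size, 1), values in [0, 1]
labels = (probs > 0.5).astype(int).ravel()  # 0 = cats, 1 = dogs (per class_indices)
print(probs[:5].ravel(), labels[:5])
###Output
_____no_output_____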
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model has achieved only around **70%** accuracy on the validation set. Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% during training. Also, the difference between training and validation accuracy is noticeable—a sign of *overfitting*. When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing to a new dataset. There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to our model. Data augmentation Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples with random transformations that yield believable-looking images. The goal is that the model will never see the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better. Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process. Augment and visualize data Begin by applying random horizontal flip augmentation to the dataset and see how individual images look after the transformation. Apply horizontal flip Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Retrieve the first image from the training generator five times; the random augmentation is re-applied on each access, so you get five augmented variants of the same image.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and apply 45 degrees of rotation randomly to the training examples.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentations to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize five randomly augmented versions of a single image produced by this combined pipeline.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
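###Markdown
To spot-check the combined pipeline (a minimal sketch, not part of the original tutorial), `ImageDataGenerator` also exposes a `random_transform` method that applies one random draw of the configured transformations to a single image array:
###Code
# Check on a synthetic image with values in [0, 1]
dummy_image = tf.random.uniform((IMG_HEIGHT, IMG_WIDTH, 3)).numpy()
transformed = image_gen_train.random_transform(dummy_image)
print(transformed.shape)  # same shape as the input image
###Output
_____no_output_____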
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization* that prevents the network from relying too heavily on any individual unit, which helps it generalize better from small training sets. Dropout is one of the regularization techniques used in this tutorial.When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, in the form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.When applying 0.1 dropout to a certain layer, it randomly zeroes 10% of the output units on each training step.Create a network architecture with this new dropout feature and apply it to different convolution and fully-connected layers. Creating a new network with Dropouts Here, you apply a dropout rate of 0.2 to the outputs of the first and last max pooling layers, so 20% of those units are randomly set to zero on each training step.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
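###Markdown
To build intuition for what the `Dropout` layers above do (a minimal sketch, not part of the original tutorial), apply a dropout layer directly to a tensor. During training, roughly 20% of the units are zeroed and the survivors are scaled by 1/(1 - 0.2) so the expected sum is unchanged; at inference time the layer is a no-op:
###Code
drop = tf.keras.layers.Dropout(0.2)
x = tf.ones((1, 10))
print(drop(x, training=True).numpy())   # some zeros, the rest scaled to 1.25
print(drop(x, training=False).numpy())  # all ones: dropout disabled
###Output
_____no_output_____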
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view the layers summary.
###Code
model_new.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up further after training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify images of flowers. It creates an image classifier using a `keras.Sequential` model, and loads data using `preprocessing.image_dataset_from_directory`. You will gain practical experience with the following concepts:* Efficiently loading a dataset off disk.* Identifying overfitting and applying techniques to mitigate it, including data augmentation and Dropout.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import TensorFlow and other libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
###Output
_____no_output_____
###Markdown
Download and explore the dataset This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains 5 sub-directories, one per class:
```
flower_photo/
  daisy/
  dandelion/
  roses/
  sunflowers/
  tulips/
```
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
Here are some roses:
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
###Output
_____no_output_____
###Markdown
And some tulips:
###Code
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
###Output
_____no_output_____
###Markdown
Load using keras.preprocessingLet's load these images off disk using the helpful [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) utility. This will take you from a directory of images on disk to a `tf.data.Dataset` in just a couple lines of code. If you like, you can also write your own data loading code from scratch by visiting the [load images](https://www.tensorflow.org/tutorials/load_data/images) tutorial. Create a dataset Define some parameters for the loader:
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
It's good practice to use a validation split when developing your model. Let's use 80% of the images for training, and 20% for validation.
###Code
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
You can find the class names in the `class_names` attribute on these datasets. These correspond to the directory names in alphabetical order.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
Visualize the dataHere are the first 9 images from the training dataset.
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will train a model using these datasets by passing them to `model.fit` in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to color channels RGB). The `label_batch` is a tensor of the shape `(32,)`; these are the corresponding labels for the 32 images. You can call `.numpy()` on the `image_batch` and `labels_batch` tensors to convert them to a `numpy.ndarray`. Configure the dataset for performanceLet's make sure to use buffered prefetching so you can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data.`Dataset.cache()` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.`Dataset.prefetch()` overlaps data preprocessing and model execution while training. Interested readers can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance#prefetching).
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Standardize the data The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, you will standardize values to be in the `[0, 1]` range by using a Rescaling layer.
###Code
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
Note: The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change. There are two ways to use this layer. You can apply it to the dataset by calling map:
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixel values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
Or, you can include the layer inside your model definition, which can simplify deployment. Let's use the second approach here. Note: You previously resized images using the `image_size` argument of `image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer. Create the modelThe model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 128 units on top of it that is activated by a `relu` activation function. This model has not been tuned for high accuracy; the goal of this tutorial is to show a standard approach.
###Code
num_classes = 5
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the `optimizers.Adam` optimizer and `losses.SparseCategoricalCrossentropy` loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
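###Markdown
Note that the model's final `Dense(num_classes)` layer has no activation, so it outputs raw logits; `from_logits=True` tells the loss to apply the softmax internally. An equivalent alternative (a sketch, not used in this tutorial) would end the model with a softmax activation and use the default `from_logits=False`:
###Code
# Hypothetical equivalent formulation:
# final layer: layers.Dense(num_classes, activation='softmax')
# loss: tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
###Output
_____no_output_____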
###Markdown
Model summaryView all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model
###Code
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results Create plots of loss and accuracy on the training and validation sets.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy differ by a large margin, and the model has achieved only around 60% accuracy on the validation set.Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 60% in the training process. Also, the difference between training and validation accuracy is noticeable—a sign of [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *Dropout* to your model. Data augmentation Overfitting generally occurs when there are a small number of training examples. [Data augmentation](https://www.tensorflow.org/tutorials/images/data_augmentation) takes the approach of generating additional training data from your existing examples by augmenting them using random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better.You will implement data augmentation using experimental [Keras Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/?version=nightly). These can be included inside your model like other layers, and run on the GPU.
###Code
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.experimental.preprocessing.RandomRotation(0.1),
layers.experimental.preprocessing.RandomZoom(0.1),
]
)
###Output
_____no_output_____
###Markdown
Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
###Code
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will use data augmentation to train a model in a moment. DropoutAnother technique to reduce overfitting is to introduce [Dropout](https://developers.google.com/machine-learning/glossary#dropout_regularization) to the network, a form of *regularization*.When you apply Dropout to a layer, it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, in the form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.Let's create a new neural network using `layers.Dropout`, then train it using augmented images.
###Code
model = Sequential([
data_augmentation,
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile and train the model
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training resultsAfter applying data augmentation and Dropout, there is less overfitting than before, and training and validation accuracy are more closely aligned.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Predict on new data Finally, let's use our model to classify an image that wasn't included in the training or validation sets. Note: Data augmentation and Dropout layers are inactive at inference time.
###Code
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = keras.preprocessing.image.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
###Output
_____no_output_____
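###Markdown
As a quick sanity check (a sketch, not part of the original tutorial), `tf.nn.softmax` turns the raw logits into a probability distribution that sums to 1, which is why `np.max(score)` can be read as a confidence:
###Code
logits = tf.constant([2.0, 1.0, 0.1, 0.0, -1.0])  # hypothetical 5-class logits
probs = tf.nn.softmax(logits)
print(probs.numpy(), float(tf.reduce_sum(probs)))  # probabilities summing to 1.0
###Output
_____no_output_____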
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.* _Overfitting_ —How to identify and prevent it.* _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks, incorporated into the data pipeline and the image classifier model.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and the directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import TensorFlow and the Keras classes needed to construct our model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs. Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:
```
cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ...]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ...]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
```
After extracting its contents, assign variables with the proper file paths for the training and validation sets.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cat and dog images are in the training and validation directories:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network:1. Read images from the disk.2. Decode the contents of these images and convert them into the proper grid format according to their RGB content.3. Convert them into floating point tensors.4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network. (A manual sketch of these steps appears after the generator code below.)
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
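###Markdown
For reference, here is a minimal sketch (not part of the original tutorial) of the same four steps done manually with `tf.io` and `tf.image` for a single file path; this is what `ImageDataGenerator` automates across the whole directory:
###Code
def load_and_preprocess(path):
    raw = tf.io.read_file(path)                          # 1. read from disk
    img = tf.io.decode_jpeg(raw, channels=3)             # 2. decode into an RGB grid
    img = tf.image.convert_image_dtype(img, tf.float32)  # 3 + 4. float tensor in [0, 1]
    return tf.image.resize(img, (IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____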
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from the disk, applies rescaling, and resizes them to the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images Visualize the training images by extracting a batch of images from the training generator—which is 128 images in this example, matching the `batch_size` set above—then plot five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset in the form `(x_train, y_train)`, where `x_train` contains the training features and `y_train` their labels. Discard the labels to visualize only the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function. The model outputs the probability of the positive class for this binary classification task via the `sigmoid` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the *Adam* optimizer and the *binary cross-entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summaryView all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model Use the model's `fit_generator` method to train the network on the batches yielded by the `ImageDataGenerator` (in newer TF versions, `Model.fit` accepts these generators directly).
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
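###Markdown
The integer divisions above control how many batches make up one epoch. With the image counts printed earlier and `batch_size = 128`, you can verify them directly (a quick check; assuming the standard filtered dataset of 2,000 training and 1,000 validation images, these print 15 and 7):
###Code
print(total_train // batch_size)  # steps_per_epoch
print(total_val // batch_size)    # validation_steps
###Output
_____no_output_____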
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
# TF 2.x records the metric under 'accuracy' (older versions used 'acc')
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy differ by a large margin, and the model has achieved only around **70%** accuracy on the validation set.Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% in the training process. Also, the difference between training and validation accuracy is noticeable—a sign of *overfitting*.When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to the model. Data augmentation Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples by augmenting the samples with random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better.Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the dataset and it will take care of applying them during the training process. Augment and visualize data Begin by applying random horizontal flip augmentation to the dataset and see what individual images look like after the transformation. Apply horizontal flip Pass `horizontal_flip` as an argument to the `ImageDataGenerator` class and set it to `True` to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Retrieve the first image from the training generator five times; the random augmentation is re-applied on each access, so you get five augmented variants of the same image.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation called rotation and apply 45 degrees of rotation randomly to the training examples.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to zoom images up to 50% randomly.
###Code
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentations to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize five randomly augmented versions of a single image produced by this combined pipeline.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization* that prevents the network from relying too heavily on any individual unit, which helps it generalize better from small training sets. Dropout is one of the regularization techniques used in this tutorial.When you apply dropout to a layer, it randomly drops out (sets to zero) a number of output units from that layer during the training process. Dropout takes a fractional number as its input value, in the form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.When applying 0.1 dropout to a certain layer, it randomly zeroes 10% of the output units on each training step.Create a network architecture with this new dropout feature and apply it to different convolution and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout to the first and last max pooling layers. Applying dropout will randomly set 20% of those outputs to zero on each training step. This helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1, activation='sigmoid')
])
###Output
_____no_output_____
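###Markdown
For contrast: *weight regularization* (which, unlike dropout, does push weights toward small values) is a separate technique. A minimal sketch of an L2-regularized dense layer, not used in this tutorial:
###Code
from tensorflow.keras import regularizers

# Hypothetical alternative: L2 weight decay on the fully-connected layer
dense_l2 = Dense(512, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4))
###Output
_____no_output_____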
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view the layers summary.
###Code
model_new.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up further after training the model for more epochs.
###Code
# TF 2.x records the metric under 'accuracy' (older versions used 'acc')
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify images of flowers. It creates an image classifier using a `tf.keras.Sequential` model, and loads data using `tf.keras.utils.image_dataset_from_directory`. You will gain practical experience with the following concepts:* Efficiently loading a dataset off disk.* Identifying overfitting and applying techniques to mitigate it, including data augmentation and dropout.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import TensorFlow and other libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
###Output
_____no_output_____
###Markdown
Download and explore the dataset This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains five sub-directories, one per class:
```
flower_photo/
  daisy/
  dandelion/
  roses/
  sunflowers/
  tulips/
```
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
Here are some roses:
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
###Output
_____no_output_____
###Markdown
And some tulips:
###Code
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
###Output
_____no_output_____
###Markdown
Load data using a Keras utilityLet's load these images off disk using the helpful `tf.keras.utils.image_dataset_from_directory` utility. This will take you from a directory of images on disk to a `tf.data.Dataset` in just a couple lines of code. If you like, you can also write your own data loading code from scratch by visiting the [Load and preprocess images](../load_data/images.ipynb) tutorial. Create a dataset Define some parameters for the loader:
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
It's good practice to use a validation split when developing your model. Let's use 80% of the images for training, and 20% for validation.
###Code
train_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
You can find the class names in the `class_names` attribute on these datasets. These correspond to the directory names in alphabetical order.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
Visualize the dataHere are the first nine images from the training dataset:
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will train a model using these datasets by passing them to `Model.fit` in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to color channels RGB). The `label_batch` is a tensor of the shape `(32,)`, these are corresponding labels to the 32 images.You can call `.numpy()` on the `image_batch` and `labels_batch` tensors to convert them to a `numpy.ndarray`. Configure the dataset for performanceLet's make sure to use buffered prefetching so you can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data:- `Dataset.cache` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.- `Dataset.prefetch` overlaps data preprocessing and model execution while training.Interested readers can learn more about both methods, as well as how to cache data to disk in the *Prefetching* section of the [Better performance with the tf.data API](../../guide/data_performance.ipynb) guide.
###Code
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
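###Markdown
The same cache/shuffle/prefetch pattern applies to any `tf.data.Dataset`. A tiny self-contained sketch on synthetic data (not part of the original tutorial):
###Code
ds = tf.data.Dataset.range(8).map(lambda x: x * 2)
ds = ds.cache().shuffle(8).prefetch(buffer_size=tf.data.AUTOTUNE)
print(list(ds.as_numpy_iterator()))  # the eight doubled values in shuffled order
###Output
_____no_output_____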
###Markdown
Standardize the data The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small.Here, you will standardize values to be in the `[0, 1]` range by using `tf.keras.layers.Rescaling`:
###Code
normalization_layer = layers.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
There are two ways to use this layer. You can apply it to the dataset by calling `Dataset.map`:
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixel values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
Or, you can include the layer inside your model definition, which can simplify deployment. Let's use the second approach here. Note: You previously resized images using the `image_size` argument of `tf.keras.utils.image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the `tf.keras.layers.Resizing` layer. Create the modelThe [Sequential](https://www.tensorflow.org/guide/keras/sequential_model) model consists of three convolution blocks (`tf.keras.layers.Conv2D`) with a max pooling layer (`tf.keras.layers.MaxPooling2D`) in each of them. There's a fully-connected layer (`tf.keras.layers.Dense`) with 128 units on top of it that is activated by a ReLU activation function (`'relu'`). This model has not been tuned for high accuracy—the goal of this tutorial is to show a standard approach.
###Code
num_classes = 5
model = Sequential([
layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile the modelFor this tutorial, choose the `tf.keras.optimizers.Adam` optimizer and `tf.keras.losses.SparseCategoricalCrossentropy` loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument to `Model.compile`.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summaryView all the layers of the network using the model's `Model.summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model
###Code
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results Create plots of loss and accuracy on the training and validation sets:
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
The plots show that training accuracy and validation accuracy differ by a large margin, and the model has achieved only around 60% accuracy on the validation set.Let's inspect what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 60% in the training process. Also, the difference between training and validation accuracy is noticeable—a sign of [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).When there are a small number of training examples, the model sometimes learns from noise or unwanted details in the training examples—to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *Dropout* to your model. Data augmentation Overfitting generally occurs when there are a small number of training examples. [Data augmentation](./data_augmentation.ipynb) takes the approach of generating additional training data from your existing examples by augmenting them using random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better.You will implement data augmentation using the following Keras preprocessing layers: `tf.keras.layers.RandomFlip`, `tf.keras.layers.RandomRotation`, and `tf.keras.layers.RandomZoom`. These can be included inside your model like other layers, and run on the GPU.
###Code
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.RandomRotation(0.1),
layers.RandomZoom(0.1),
]
)
###Output
_____no_output_____
###Markdown
Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
###Code
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will use data augmentation to train a model in a moment. DropoutAnother technique to reduce overfitting is to introduce [dropout](https://developers.google.com/machine-learning/glossary#dropout_regularization) regularization to the network.When you apply dropout to a layer, it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, in the form such as 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units randomly from the applied layer.Let's create a new neural network with `tf.keras.layers.Dropout` before training it using the augmented images:
###Code
model = Sequential([
data_augmentation,
layers.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile and train the model
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training resultsAfter applying data augmentation and `tf.keras.layers.Dropout`, there is less overfitting than before, and training and validation accuracy are more closely aligned:
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Predict on new data Finally, let's use our model to classify an image that wasn't included in the training or validation sets. Note: Data augmentation and dropout layers are inactive at inference time.
###Code
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = tf.keras.utils.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = tf.keras.utils.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify cats or dogs from images. It builds an image classifier using a `tf.keras.Sequential` model and loads data using `tf.keras.preprocessing.image.ImageDataGenerator`. You will get some practical experience and develop intuition for the following concepts:* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class to efficiently work with data on disk to use with the model.* _Overfitting_ —How to identify and prevent it.* _Data augmentation_ and _dropout_ —Key techniques to fight overfitting in computer vision tasks, incorporated into the data pipeline and the image classifier model.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import packages Let's start by importing the required packages. The `os` package is used to read files and the directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and `matplotlib.pyplot` is used to plot graphs and display images from the training and validation data.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
###Output
_____no_output_____
###Markdown
Import TensorFlow and the Keras classes needed to construct our model.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load data Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs. Cats dataset from Kaggle. Download the archive version of the dataset and store it in the "/tmp/" directory.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
###Output
_____no_output_____
###Markdown
The dataset has the following directory structure:
cats_and_dogs_filtered
|__ train
|______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
|______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
|______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
|______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
After extracting its contents, assign variables with the proper file paths for the training and validation sets.
###Code
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understand the data Let's look at how many cats and dogs images are in the training and validation directory:
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
For convenience, set up variables to use while pre-processing the dataset and training the network.
###Code
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
###Output
_____no_output_____
###Markdown
Data preparation Format the images into appropriately pre-processed floating point tensors before feeding them to the network:1. Read images from the disk.2. Decode the contents of these images and convert them into a proper grid format according to their RGB content.3. Convert them into floating point tensors.4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.Fortunately, all these tasks can be done with the `ImageDataGenerator` class provided by `tf.keras`. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors, which is helpful when training the network. (A manual sketch of these four steps follows the next cell.)
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
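###Markdown
For intuition, the four preprocessing steps listed above can also be done by hand with low-level TensorFlow ops. A minimal sketch (assuming `train_cats_dir`, `IMG_HEIGHT` and `IMG_WIDTH` from the cells above; the `ImageDataGenerator` does all of this for you):
###Code
sample_path = os.path.join(train_cats_dir, os.listdir(train_cats_dir)[0])
raw = tf.io.read_file(sample_path)                   # 1. read the image file from disk
img = tf.io.decode_jpeg(raw, channels=3)             # 2. decode it into an RGB grid of pixels
img = tf.image.convert_image_dtype(img, tf.float32)  # 3.+4. float tensor rescaled to [0, 1]
img = tf.image.resize(img, (IMG_HEIGHT, IMG_WIDTH))  # resize to the target input size
print(img.shape, float(tf.reduce_min(img)), float(tf.reduce_max(img)))
###Output
_____no_output_____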
###Markdown
After defining the generators for training and validation images, the `flow_from_directory` method loads images from disk, applies rescaling, and resizes the images to the required dimensions.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize training images Visualize the training images by extracting a batch of `batch_size` images (128 in this example) from the training generator, then plotting five of them with `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset in the form `(x_train, y_train)`, where `x_train` holds the training features and `y_train` their labels. Discard the labels to visualize only the training images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
ax.axis('off')
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5])
###Output
_____no_output_____
###Markdown
Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a `relu` activation function.
###Code
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
###Markdown
Compile the model For this tutorial, choose the *Adam* optimizer and the *binary cross-entropy* loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
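###Markdown
A note on `from_logits=True`: the final `Dense(1)` layer has no activation, so the model outputs raw logits and the loss applies the sigmoid internally. A tiny illustration with hypothetical logit values:
###Code
# A logit of 0.0 corresponds to a probability of 0.5, the boundary between the two classes.
for logit in [-2.0, 0.0, 2.0]:
    print(logit, float(tf.sigmoid(logit)))
###Output
_____no_output_____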
###Markdown
Model summary View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model Use the model's `fit_generator` method to train the network on the generator's batches (note: `fit_generator` belongs to the model, not to `ImageDataGenerator`; a non-deprecated `Model.fit` equivalent is sketched after this cell).
###Code
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
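###Markdown
For reference, `fit_generator` is deprecated in newer TensorFlow releases; the equivalent call with `Model.fit`, which accepts generators directly, would look like this (a sketch, commented out so the cell does not retrain the model):
###Code
# history = model.fit(
#     train_data_gen,
#     steps_per_epoch=total_train // batch_size,
#     epochs=epochs,
#     validation_data=val_data_gen,
#     validation_steps=total_val // batch_size
# )
###Output
_____no_output_____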
###Markdown
Visualize training results Now visualize the results after training the network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
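###Markdown
Before the diagnosis below, you can also put a number on the gap between the two curves; a small sketch using the `acc` and `val_acc` lists from the previous cell:
###Code
gap = [t - v for t, v in zip(acc, val_acc)]
print("final training / validation accuracy: {:.2f} / {:.2f}".format(acc[-1], val_acc[-1]))
print("largest accuracy gap over the epochs: {:.2f}".format(max(gap)))
###Output
_____no_output_____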
###Markdown
As you can see from the plots, training accuracy and validation accuracy differ by a large margin, and the model has achieved only around **70%** accuracy on the validation set.Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 70% in the training process. The noticeable gap between training and validation accuracy is a sign of *overfitting*.When there are a small number of training examples, the model sometimes learns noise and unwanted details from the training examples, to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *dropout* to the model. Data augmentation Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation generates more training data from existing samples by applying random transformations that yield believable-looking images, so that the model ideally never sees the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better.Implement this in `tf.keras` using the `ImageDataGenerator` class. Pass different transformations to the generator and it will take care of applying them during the training process. Augment and visualize data Begin by applying a random horizontal flip augmentation to the dataset and see what individual images look like after the transformation. Apply horizontal flip Pass `horizontal_flip=True` as an argument to the `ImageDataGenerator` class to apply this augmentation.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
###Output
_____no_output_____
###Markdown
Take one sample image from the training examples and repeat it five times so that the augmentation is applied to the same image five times.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Randomly rotate the image Let's take a look at a different augmentation, rotation, and randomly rotate the training examples by up to 45 degrees.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Apply zoom augmentation Apply a zoom augmentation to the dataset to randomly zoom images by up to 50%.
###Code
# zoom_range from 0 - 1 where 1 = 100%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5) #
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH))
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Put it all together Apply all the previous augmentations at once. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip and zoom augmentation to the training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualize how a single image would look five different times when passing these augmentations randomly to the dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Create validation data generator Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using `ImageDataGenerator`.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Dropout Another technique to reduce overfitting is to introduce *dropout* to the network. It is a form of *regularization*: when you apply dropout to a layer, it randomly drops out (sets to zero) a number of that layer's output units during training, which prevents units from co-adapting too strongly to the training examples. Dropout takes a fractional number as its input value, in the form 0.1, 0.2, 0.4, etc., meaning that 10%, 20% or 40% of the output units are dropped at random from the applied layer. For example, applying 0.1 dropout to a layer randomly zeroes 10% of its output units at each training step.Create a network architecture with this new dropout feature and apply it after different convolution and fully-connected layers. Creating a new network with Dropouts Here, you apply dropout after the first and last max pool layers. Each dropout layer randomly sets 20% of its inputs to zero during training. This helps to avoid overfitting on the training dataset.
###Code
model_new = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
###Output
_____no_output_____
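###Markdown
To see the dropout mechanics concretely, here is a tiny demonstration on a toy tensor (a sketch, independent of the tutorial's data):
###Code
# During training, roughly 20% of the units are zeroed and the survivors are
# scaled by 1/(1 - 0.2) so the expected activation is preserved; at inference
# time the layer is the identity.
drop = Dropout(0.2)
x = tf.ones((1, 10))
print(drop(x, training=True).numpy())
print(drop(x, training=False).numpy())
###Output
_____no_output_____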
###Markdown
Compile the model After introducing dropouts to the network, compile the model and view the layers summary.
###Code
model_new.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model_new.summary()
###Output
_____no_output_____
###Markdown
Train the model After successfully introducing data augmentations to the training examples and adding dropouts to the network, train this new network:
###Code
history = model_new.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
_____no_output_____
###Markdown
Visualize the model Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up further after training the model for more epochs.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification This tutorial shows how to classify images of flowers. It creates an image classifier using a `keras.Sequential` model, and loads data using `preprocessing.image_dataset_from_directory`. You will gain practical experience with the following concepts:* Efficiently loading a dataset off disk.* Identifying overfitting and applying techniques to mitigate it, including data augmentation and Dropout.This tutorial follows a basic machine learning workflow:1. Examine and understand data2. Build an input pipeline3. Build the model4. Train the model5. Test the model6. Improve the model and repeat the process Import TensorFlow and other libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
###Output
_____no_output_____
###Markdown
Download and explore the dataset This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains 5 sub-directories, one per class:
```
flower_photo/
  daisy/
  dandelion/
  roses/
  sunflowers/
  tulips/
```
###Code
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
###Output
_____no_output_____
###Markdown
After downloading, you should now have a copy of the dataset available. There are 3,670 total images:
###Code
image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)
###Output
_____no_output_____
###Markdown
Here are some roses:
###Code
roses = list(data_dir.glob('roses/*'))
PIL.Image.open(str(roses[0]))
PIL.Image.open(str(roses[1]))
###Output
_____no_output_____
###Markdown
And some tulips:
###Code
tulips = list(data_dir.glob('tulips/*'))
PIL.Image.open(str(tulips[0]))
PIL.Image.open(str(tulips[1]))
###Output
_____no_output_____
###Markdown
Load using keras.preprocessing Let's load these images off disk using the helpful [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory) utility. This will take you from a directory of images on disk to a `tf.data.Dataset` in just a couple of lines of code. If you like, you can also write your own data loading code from scratch by visiting the [load images](https://www.tensorflow.org/tutorials/load_data/images) tutorial. Create a dataset Define some parameters for the loader:
###Code
batch_size = 32
img_height = 180
img_width = 180
###Output
_____no_output_____
###Markdown
It's good practice to use a validation split when developing your model. Let's use 80% of the images for training, and 20% for validation.
###Code
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
###Output
_____no_output_____
###Markdown
You can find the class names in the `class_names` attribute on these datasets. These correspond to the directory names in alphabetical order.
###Code
class_names = train_ds.class_names
print(class_names)
###Output
_____no_output_____
###Markdown
Visualize the data Here are the first 9 images from the training dataset.
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will train a model using these datasets by passing them to `model.fit` in a moment. If you like, you can also manually iterate over the dataset and retrieve batches of images:
###Code
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
###Output
_____no_output_____
###Markdown
The `image_batch` is a tensor of the shape `(32, 180, 180, 3)`. This is a batch of 32 images of shape `180x180x3` (the last dimension refers to the RGB color channels). The `label_batch` is a tensor of the shape `(32,)`; these are the corresponding labels for the 32 images. You can call `.numpy()` on the `image_batch` and `labels_batch` tensors to convert them to a `numpy.ndarray`. Configure the dataset for performance Let's make sure to use buffered prefetching so you can yield data from disk without having I/O become blocking. These are two important methods you should use when loading data.`Dataset.cache()` keeps the images in memory after they're loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache.`Dataset.prefetch()` overlaps data preprocessing and model execution while training. Interested readers can learn more about both methods, as well as how to cache data to disk, in the [data performance guide](https://www.tensorflow.org/guide/data_performance#prefetching).
###Code
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
###Output
_____no_output_____
###Markdown
Standardize the data The RGB channel values are in the `[0, 255]` range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, you will standardize values to be in the `[0, 1]` range by using a Rescaling layer.
###Code
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
###Output
_____no_output_____
###Markdown
Note: The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change. There are two ways to use this layer. You can apply it to the dataset by calling map:
###Code
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
###Output
_____no_output_____
###Markdown
Or, you can include the layer inside your model definition, which can simplify deployment. Let's use the second approach here. Note: you previously resized images using the `image_size` argument of `image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the [Resizing](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer (see the sketch after the model definition below). Create the model The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 128 units on top of it that is activated by a `relu` activation function. This model has not been tuned for high accuracy; the goal of this tutorial is to show a standard approach.
###Code
num_classes = 5
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
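###Markdown
As noted above, the resizing could also live inside the model; a minimal sketch of such a preprocessing block (an illustration only, not used in this tutorial):
###Code
# Hypothetical resize-and-rescale block that could be prepended to the model:
resize_and_rescale = Sequential([
    layers.experimental.preprocessing.Resizing(img_height, img_width),
    layers.experimental.preprocessing.Rescaling(1./255)
])
print(resize_and_rescale(tf.zeros((1, 300, 300, 3))).shape)  # -> (1, 180, 180, 3)
###Output
_____no_output_____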
###Markdown
Compile the model For this tutorial, choose the `optimizers.Adam` optimizer and `losses.SparseCategoricalCrossentropy` loss function. To view training and validation accuracy for each training epoch, pass the `metrics` argument.
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model summary View all the layers of the network using the model's `summary` method:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model
###Code
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results Create plots of loss and accuracy on the training and validation sets.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the plots, training accuracy and validation accuracy differ by a large margin, and the model has achieved only around 60% accuracy on the validation set.Let's look at what went wrong and try to increase the overall performance of the model. Overfitting In the plots above, the training accuracy is increasing linearly over time, whereas validation accuracy stalls around 60% in the training process. The noticeable gap between training and validation accuracy is a sign of [overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).When there are a small number of training examples, the model sometimes learns noise and unwanted details from the training examples, to an extent that it negatively impacts the performance of the model on new examples. This phenomenon is known as overfitting. It means that the model will have a difficult time generalizing on a new dataset.There are multiple ways to fight overfitting in the training process. In this tutorial, you'll use *data augmentation* and add *Dropout* to your model. Data augmentation Overfitting generally occurs when there are a small number of training examples. [Data augmentation](https://www.tensorflow.org/tutorials/images/data_augmentation) generates additional training data from your existing examples by augmenting them with random transformations that yield believable-looking images. This helps expose the model to more aspects of the data and generalize better.You will implement data augmentation using the layers from `tf.keras.layers.experimental.preprocessing`. These can be included inside your model like other layers, and run on the GPU.
###Code
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal",
input_shape=(img_height,
img_width,
3)),
layers.experimental.preprocessing.RandomRotation(0.1),
layers.experimental.preprocessing.RandomZoom(0.1),
]
)
###Output
_____no_output_____
###Markdown
Let's visualize what a few augmented examples look like by applying data augmentation to the same image several times:
###Code
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
You will use data augmentation to train a model in a moment. Dropout Another technique to reduce overfitting is to introduce [Dropout](https://developers.google.com/machine-learning/glossary#dropout_regularization) to the network, a form of *regularization*.When you apply Dropout to a layer, it randomly drops out (by setting the activation to zero) a number of output units from the layer during the training process. Dropout takes a fractional number as its input value, in the form 0.1, 0.2, 0.4, etc. This means dropping out 10%, 20% or 40% of the output units at random from the applied layer.Let's create a new neural network using `layers.Dropout`, then train it using augmented images.
###Code
model = Sequential([
data_augmentation,
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
###Output
_____no_output_____
###Markdown
Compile and train the model
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
###Output
_____no_output_____
###Markdown
Visualize training results After applying data augmentation and Dropout, there is less overfitting than before, and the training and validation accuracy curves are more closely aligned.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
Predict on new data Finally, let's use our model to classify an image that wasn't included in the training or validation sets. Note: Data augmentation and Dropout layers are inactive at inference time.
###Code
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = keras.preprocessing.image.load_img(
sunflower_path, target_size=(img_height, img_width)
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
###Output
_____no_output_____ |
Multivariate Regression Feature Scoring Example.ipynb | ###Markdown
General Imports
###Code
import pandas as pd
import numpy as np
import random
import json
from datetime import date
from collections import Counter
from urllib.request import urlopen

import plotly.graph_objects as go
import plotly.express as px
import plotly.figure_factory as ff
from plotly.offline import iplot
import chart_studio
import chart_studio.plotly as py
import cufflinks  # Cufflinks wrapper on plotly

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'last'  # show only the last expression's output per cell

pd.set_option("display.precision", 2)
pd.options.display.max_columns = 30
###Output
_____no_output_____
###Markdown
Sklearn Imports
###Code
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
from matplotlib import pyplot
from collections import OrderedDict
date_today = str(date.today())
###Output
_____no_output_____
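###Markdown
A quick note on the scoring function used below: `f_regression` fits a univariate linear model between each feature and the target and returns the F-statistic, F = r^2 / (1 - r^2) * (n - 2), where r is the Pearson correlation and n the sample count. A minimal sketch on synthetic data (illustration only, not the project data):
###Code
# Synthetic demo: score 3 random features against a linear target.
X_demo, y_demo = make_regression(n_samples=100, n_features=3, noise=10, random_state=0)
fs_demo = SelectKBest(score_func=f_regression, k='all').fit(X_demo, y_demo)
print(fs_demo.scores_)  # one F-score per feature; larger = stronger linear relationship
###Output
_____no_output_____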
###Markdown
Read Data
###Code
df = pd.read_csv('path/to/data/<some-filename>.csv')
selected_columns = ['col0','col1', 'col2', 'col3', 'col4', 'col5', 'col6']
# feature selection
def select_features(X_train, y_train, X_test):
# configure to select all features
fs = SelectKBest(score_func=f_regression, k='all')
# learn relationship from training data
fs.fit(X_train, y_train)
# transform train input data
X_train_fs = fs.transform(X_train)
# transform test input data
X_test_fs = fs.transform(X_test)
return X_train_fs, X_test_fs, fs
def scatter_plot(x_colname, x, y_colname, y):
fig = px.scatter(x=x, y=y)
layout = go.Layout(
title=f"""Correlation between '{x_colname}' and '{y_colname}'""",
xaxis=dict(
title=x_colname
),
        yaxis=dict(
            title=y_colname
        )
    )
fig.update_layout(layout)
fig.show()
df_input_features = df[selected_columns].dropna()
X, y = np.nan_to_num(df_input_features.to_numpy()), df.loc[df_input_features.index]['Predicted_Value_Column'].to_numpy()  # note: the original cell referenced an undefined `filtered_df`; `df` is the frame loaded above
X.shape
y.shape
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# feature selection
X_train_fs, X_test_fs, fs = select_features(X_train, y_train, X_test)
feature2score = {}
# what are scores for the features
for i in range(len(fs.scores_)):
feature2score[df_input_features.columns[i]] = fs.scores_[i]
# report the scores, sorted in descending order (a bar-chart sketch follows below)
print('-'*100)
print('Features sorted in descending order of their scores:')
print('-'*100)
for k,v in OrderedDict(sorted(feature2score.items(),key=lambda kv: kv[1], reverse=True)).items():
score = round(v,2)
predictor = k
print(f' Score {score} for predictor {predictor}')
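# Optional visualization (a sketch using the matplotlib `pyplot` import above;
# assumes the order of fs.scores_ matches the columns of df_input_features):
pyplot.bar(range(len(fs.scores_)), fs.scores_)
pyplot.xticks(range(len(fs.scores_)), df_input_features.columns, rotation=45, ha='right')
pyplot.ylabel('f_regression score')
pyplot.tight_layout()
pyplot.show()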
###Output
_____no_output_____ |
Exploration/.ipynb_checkpoints/Summary Statistics-checkpoint.ipynb | ###Markdown
1. Setup and Import libraries
###Code
# django path
mysite_path = "C:\\Data\\UCL\\@MSc Project\\DB\\mysite\\"
# standard packages
import os
import sys
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime
import django
from set_django_db import set_django_db
from asgiref.sync import sync_to_async
from IPython.core.display import HTML
%matplotlib inline
# set django models
set_django_db(mysite_path)
from tables_daniel.models import Company, Review
# specifically for Jupyter notebooks
os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"
# center plots
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
    vertical-align: middle;
}
</style>
""")
###Output
_____no_output_____
###Markdown
2. Load, merge and filter the datasets
**Content**
2.1 Load companies
2.2 Load reviews
2.3 Some useful merges/adds
2.4 Filter the data from the monitored period between 2018-07-01 and 2020-06-30
2.5 Filter only the reviews for the companies with more than 10 reviews
2.6 Add a column concatenating pros & cons
2.7 Update employee relationship
2.8 Add columns with total reviews to companies_df
2.1 Companies
###Code
companies = pd.DataFrame(
list(
Company
.objects
.values('id', 'Company', 'Sector', 'ListedOn')
.all()
)
)
companies_id = list(companies.id)
###Output
_____no_output_____
###Markdown
2.2 Reviews
###Code
reviews = list(
Review
.objects
.values(
'id', 'Company_id', 'ReviewTitle', 'Rating',
'JobTitle', 'EmployeeRelationship',
'Contract', 'Pros', 'Cons',
'Year', 'Month', 'Day'
)
.all()
    .filter(Company_id=company_id) for company_id in companies_id
)
reviews_df = pd.DataFrame(
sum([list(reviews_i) for reviews_i in reviews],[])
).drop_duplicates()
"""
for i in range(reviews_df.shape[0]):
row = dict(reviews_df.iloc[i,:])
review = (
Review
.objects
.values('id', 'JobTitle' ,'EmployeeRelationship')
.get(id=row['id'])
)
if review['JobTitle'] in ['Former Employee', 'Current Employee']:
new_jobTitle = review['EmployeeRelationship']
new_relationship = review['JobTitle']
(Review
.objects
.filter(id=row['id'])
.update(
JobTitle = new_jobTitle,
EmployeeRelationship = new_relationship
)
)
else:
pass
if (i+1)%100==0:
print(i+1)
"""
###Output
_____no_output_____
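###Markdown
A side note on the query pattern above: looping over `companies_id` issues one database query per company. A single query with Django's `__in` lookup is usually equivalent (a sketch, not executed here):
###Code
# Sketch of a single-query variant (same fields as above):
# reviews_qs = (
#     Review.objects
#     .values('id', 'Company_id', 'ReviewTitle', 'Rating', 'JobTitle',
#             'EmployeeRelationship', 'Contract', 'Pros', 'Cons',
#             'Year', 'Month', 'Day')
#     .filter(Company_id__in=companies_id)
# )
# reviews_df = pd.DataFrame(list(reviews_qs)).drop_duplicates()
###Output
_____no_output_____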
###Markdown
2.3 Some useful merges/adds
###Code
# add sector and company name
reviews_df = reviews_df.merge(
companies[['id', 'Company', 'Sector', 'ListedOn']].rename(columns={'id': 'Company_id'}),
on='Company_id'
)
# add date column used for filtering
reviews_df['Date'] = reviews_df.apply(lambda x: '-'.join(
[str(x['Year']), str(x['Month']), str(x['Day'])]
), axis=1
)
reviews_df
def string_to_date(date_str):
try:
return datetime.strptime(date_str, '%Y-%m-%d')
except:
return datetime.strptime('1800-1-1', '%Y-%m-%d')
def string_to_YM(date_str):
try:
return datetime.strptime(date_str, '%Y-%m')
except:
return datetime.strptime('1800-1-1', '%Y-%m-%d')
reviews_df['Date'] = reviews_df['Date'].apply(lambda x: string_to_date(x))
reviews_df['Year-Month'] = reviews_df.apply(lambda x: string_to_YM('-'.join([str(x['Year']), str(x['Month'])])), axis=1)
###Output
_____no_output_____
###Markdown
2.4 Filter the data from the monitored period between 2018-07-01 and 2020-06-30
###Code
# further analysis focusing only on the companies with more than 10 reviews in the monitored period
min_date = datetime.strptime('2018-7-1', '%Y-%m-%d')
max_date = datetime.strptime('2020-6-30', '%Y-%m-%d')
reviews_df = pd.DataFrame(
reviews_df[(reviews_df.Date >= min_date) & (reviews_df.Date <= max_date)]
)
reviews_df
###Output
_____no_output_____
###Markdown
2.5 Filter only the reviews for companies with more than 10 reviews
###Code
# count reviews
reviews_count = (
reviews_df
.groupby('Company')
.Rating
.count()
)
# filter companies
companies_filtered = list(reviews_count[reviews_count>10].index)
reviews_df = reviews_df[reviews_df.Company.isin(companies_filtered)]
print(
f"There are {reviews_df.shape[0]:.0f} reviews in total."
)
###Output
There are 392408 reviews in total.
###Markdown
2.6 Add a column of concatenated pros & cons + their length
###Code
reviews_df['Review'] = reviews_df['Pros'] + ' ' + reviews_df['Cons']
reviews_df['ReviewLength'] = reviews_df['Review'].apply(lambda x: len(x))
reviews_df.head()
###Output
_____no_output_____
###Markdown
2.7 Update employee relationship
###Code
def update_EmployeeRelationship(x):
if x not in ['Current Employee', 'Former Employee']:
return 'Not specified'
else:
return x
reviews_df['EmployeeRelationship'] = [update_EmployeeRelationship(reviews_df.loc[row, 'EmployeeRelationship']) for row in reviews_df.index]
###Output
C:\Users\danie\Anaconda3\lib\site-packages\ipykernel_launcher.py:7: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
import sys
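###Markdown
The `SettingWithCopyWarning` above is raised because `reviews_df` is a filtered slice of an earlier frame; taking an explicit copy before assigning avoids it (a sketch of the fix, not applied here so the recorded outputs stay unchanged):
###Code
# reviews_df = reviews_df.copy()  # break the link to the parent DataFrame
# reviews_df['EmployeeRelationship'] = reviews_df['EmployeeRelationship'].apply(update_EmployeeRelationship)
###Output
_____no_output_____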
###Markdown
2.8 Add columns with total reviews to companies_df
###Code
# 1. get number of reviews per company
reviewsPerCompany = (
reviews_df
.groupby('Company')
.Rating
.count()
)
# 2. assign these values to company_df
companies_filtered_df = companies[companies.Company.isin(companies_filtered)]
companies_filtered_df['TotalReviews'] = 0
companies_filtered_df['TotalReviews'] = [reviewsPerCompany.loc[company] for company in companies_filtered_df.Company]
###Output
C:\Users\danie\Anaconda3\lib\site-packages\ipykernel_launcher.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
# This is added back by InteractiveShellApp.init_path()
C:\Users\danie\Anaconda3\lib\site-packages\ipykernel_launcher.py:12: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
if sys.path[0] == '':
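###Markdown
As above, the SettingWithCopyWarning can be avoided by copying first; `Series.map` then fills in the counts in one vectorized step (a sketch):
###Code
# sketch: map each company name to its review count
companies_filtered_df = companies_filtered_df.copy()
companies_filtered_df['TotalReviews'] = companies_filtered_df['Company'].map(reviewsPerCompany)
###Output
_____no_output_____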
###Markdown
3. Summary statistics**Content** 3.1 Distribution of ratings/reviews over former/current and part/full-time employees 3.2 Mean, median, standard deviation and quantiles of total reviews per company 3.3 Mean, median, std and quantiles per sector
###Code
# helper quantile/quartile functions
def q1(x):
return x.quantile(.25)
def q3(x):
return x.quantile(.75)
def q10(x):
return x.quantile(.1)
def q90(x):
return x.quantile(.9)
# Overall rating stats
print(
reviews_df
.Rating
.agg(['mean', 'std', q1, 'median', q3])
)
# Overall reviews stats
print(
reviews_df
.ReviewLength
.agg(['mean', 'std', q1, 'median', q3])
)
###Output
mean 198.459048
std 290.027821
q1 81.000000
median 112.000000
q3 197.000000
Name: ReviewLength, dtype: float64
###Markdown
3.1 Distribution of ratings/reviews over former/current and part/full-time employees
###Code
print(
reviews_df
.groupby('Contract')
.Rating
.agg(['count', 'mean', 'std', q1, 'median', q3])
)
print(
reviews_df
.groupby('EmployeeRelationship')
.Rating
.agg(['count', 'mean', 'std', q1, 'median', q3])
)
print(
reviews_df
.groupby('Contract')
.ReviewLength
.agg(['count', 'mean', 'std', q1, 'median', q3])
)
print(
reviews_df
.groupby('EmployeeRelationship')
.ReviewLength
.agg(['count', 'mean', 'std', q1, 'median', q3])
)
###Output
count mean std q1 median q3
EmployeeRelationship
Current Employee 220135 193.473196 268.526755 81 112 195
Former Employee 154224 208.151915 320.575667 82 114 202
Not specified 18049 176.446063 263.869329 76 104 169
###Markdown
3.2 Mean, median, standard deviation and quantiles of total reviews per company
###Code
print(
reviews_df
.groupby('Company')
.Rating
.count()
.agg(['mean', 'std', q10, q1, 'median', q3, q90, 'max'])
)
###Output
mean 648.595041
std 1680.993707
q10 39.400000
q1 83.000000
median 214.000000
q3 559.000000
q90 1361.200000
max 27455.000000
Name: Rating, dtype: float64
###Markdown
**Global**
###Code
file_path = r'C:\Data\UCL\@MSc Project - Data and sources\Images\histogram01.png'
plt.figure(figsize=(7,5))
ax = (reviews_df
.groupby('Company')
.Rating
.count()
.plot
.hist()
)
# set label, title, font size etc.
ax.set_xlabel('Number of reviews', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
ax.set_title('Number of reviews per company', fontsize=14)
ax.set_xlim((0,companies_filtered_df.TotalReviews.max()))
plt.tight_layout()
plt.savefig(fname=file_path, dpi=300)
###Output
_____no_output_____
###Markdown
**Focused on companies below the 90th percentile**
###Code
file_path = r'C:\Data\UCL\@MSc Project - Data and sources\Images\histogram01_zoomed.png'
bins = list(range(0, 1500, 100))
bins.append(companies_filtered_df.TotalReviews.max())
plt.figure(figsize=(7,5))
ax = (reviews_df
.groupby('Company')
.Rating
.count()
.plot
.hist(bins=bins)
)
# set label, title, font size etc.
ax.set_xlabel('Number of reviews', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
ax.set_title('Number of reviews per company (zoom on the bottom 90th)', fontsize=14)
ax.set_xlim((0, 1400))
plt.tight_layout()
plt.savefig(fname=file_path, dpi=300)
###Output
_____no_output_____
###Markdown
**Combo**
###Code
# setup
filepath = r'C:\Data\UCL\@MSc Project - Data and sources\Images\histogram01_combo.png'
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(14,5))
# global
ax1=(reviews_df
.groupby('Company')
.Rating
.count()
.hist(ax=ax1, grid=False)
)
ax1.set_xlabel('Number of reviews', fontsize=12)
ax1.set_ylabel('Frequency', fontsize=12)
ax1.set_title('Number of reviews per company', fontsize=14)
ax1.set_xlim((0,companies_filtered_df.TotalReviews.max()))
# zoom
bins = list(range(0, 1500, 100))
bins.append(companies_filtered_df.TotalReviews.max())
ax2=(reviews_df
.groupby('Company')
.Rating
.count()
.hist(ax=ax2, bins=bins,grid=False)
)
ax2.set_xlabel('Number of reviews', fontsize=12)
ax2.set_ylabel('Frequency', fontsize=12)
ax2.set_title('Number of reviews per company (zoom on the bottom 90th)', fontsize=14)
ax2.set_xlim((0, 1400))
# tight_layout and save
plt.tight_layout()
plt.savefig(fname=filepath, dpi=300)
###Output
_____no_output_____
###Markdown
3.3 Mean, median, std and quantiles per market and sector 3.3.1 Review length
###Code
print(
companies_filtered_df
.TotalReviews
.agg(['mean', 'std', q10, q1, 'median', q3, q90, 'max'])
)
filepath = r'C:/Data/UCL/@MSc Project - Data and sources/Exploration csv/StockMarket_hist.csv'
(companies_filtered_df
.groupby('ListedOn')
.TotalReviews
.agg(['count', 'mean', 'std', q10, q1, 'median', q3, q90, 'max'])
.round(2)
).to_csv(filepath)
print(
companies_filtered_df
.groupby('ListedOn')
.TotalReviews
.agg(['count', 'sum', 'mean', 'std', q10, q1, 'median', q3, q90, 'max'])
.round(2)
)
filepath = r'C:/Data/UCL/@MSc Project - Data and sources/Exploration csv/Sector_hist.csv'
(companies_filtered_df
.groupby('Sector')
.TotalReviews
.agg(['count', 'mean', 'std', q10, q1, 'median', q3, q90, 'max'])
.round(2)
).to_csv(filepath)
print(
companies_filtered_df
.groupby('Sector')
.TotalReviews
.agg(['count', 'sum', 'mean', 'std', q10, q1, 'median', q3, q90, 'max'])
.round(2)
)
filepath = r'C:/Data/UCL/@MSc Project - Data and sources/Exploration csv/Reviews_IndexSector.xlsx'
(companies_filtered_df
.groupby(['ListedOn', 'Sector'])
.TotalReviews
.agg(['mean', 'std', q10, q1, 'median', q3, q90, 'max'])
.round(2)
).to_excel(filepath)
print(
companies_filtered_df
.groupby(['ListedOn', 'Sector'])
.TotalReviews
.agg(['sum', 'mean', 'std', q10, q1, 'median', q3, q90, 'max'])
.round(2)
)
(reviews_df
.groupby('ListedOn')
.ReviewLength
.agg(['mean', 'std', q1, 'median', q3])
)
(reviews_df
.groupby('Sector')
.ReviewLength
.agg(['mean', 'std', q1, 'median', q3])
)
###Output
_____no_output_____
###Markdown
3.3.2 Ratings
###Code
filepath = r'C:/Data/UCL/@MSc Project - Data and sources/Exploration csv/Ratings - descriptive stats.csv'
(reviews_df
.groupby('ListedOn')
.Rating
.agg(['mean', 'std', q1, 'median', q3])
.round(2)
).to_csv(filepath)
print(
reviews_df
.groupby('Sector')
.Rating
.agg(['mean', 'std', q1, 'median', q3])
.round(2)
)
print(
reviews_df
.groupby('ListedOn')
.Rating
.agg(['mean', 'std', q1, 'median', q3])
)
filepath = r'C:/Data/UCL/@MSc Project - Data and sources/Exploration csv/RatingsIndexSector.xlsx'
(reviews_df
.groupby(['ListedOn', 'Sector'])
.Rating
.agg(['mean', 'std', q1, 'median', q3])
.round(2)
).to_excel(filepath)
print(
reviews_df
.groupby(['ListedOn', 'Sector'])
.Rating
.agg(['mean', 'std', q1, 'median', q3])
.round(2)
)
###Output
mean std q1 median q3
ListedOn Sector
EURO STOXX 50 Basic Materials 3.42 1.34 3.0 4.0 5.0
Communication Services 3.81 1.06 3.0 4.0 5.0
Consumer Cyclical 3.80 1.13 3.0 4.0 5.0
Consumer Defensive 3.71 1.22 3.0 4.0 5.0
Energy 3.66 1.21 3.0 4.0 5.0
Financial Services 3.51 1.20 3.0 4.0 4.0
Healthcare 3.69 1.16 3.0 4.0 5.0
Industrials 3.90 1.10 3.0 4.0 5.0
Real Estate 3.23 1.45 2.0 4.0 4.0
Technology 4.29 0.98 4.0 5.0 5.0
Utilities 3.49 1.25 3.0 4.0 4.0
FTSE 100 Basic Materials 3.61 1.22 3.0 4.0 5.0
Communication Services 3.58 1.21 3.0 4.0 5.0
Consumer Cyclical 3.44 1.34 3.0 4.0 5.0
Consumer Defensive 3.55 1.18 3.0 4.0 4.0
Energy 3.93 1.12 3.0 4.0 5.0
Financial Services 3.65 1.12 3.0 4.0 4.0
Healthcare 3.76 1.16 3.0 4.0 5.0
Industrials 3.63 1.37 3.0 4.0 5.0
Real Estate 3.93 1.23 3.0 4.0 5.0
Technology 3.58 1.23 3.0 4.0 4.0
Utilities 3.48 1.31 3.0 4.0 5.0
S&P 500 Basic Materials 3.57 1.21 3.0 4.0 4.0
Communication Services 3.61 1.29 3.0 4.0 5.0
Consumer Cyclical 3.63 1.22 3.0 4.0 5.0
Consumer Defensive 3.34 1.27 3.0 3.0 4.0
Energy 3.58 1.18 3.0 4.0 4.0
Financial Services 3.58 1.22 3.0 4.0 5.0
Healthcare 3.44 1.31 3.0 4.0 5.0
Industrials 3.55 1.27 3.0 4.0 5.0
Real Estate 3.68 1.36 3.0 4.0 5.0
Technology 3.73 1.16 3.0 4.0 5.0
Utilities 3.58 1.32 3.0 4.0 5.0
###Markdown
3.4 Monthly mean rating per sector (+ 3M moving average)
###Code
reviews_MonthSector = pd.DataFrame(
reviews_df
.groupby(['Sector', 'Year-Month'])
.agg(['count', 'mean'])
)
# flatten the MultiIndex columns produced by the groupby-agg, keeping the Rating stats
reviews_MonthSector = pd.DataFrame(
reviews_MonthSector.to_records()
)[['Sector', 'Year-Month', "('Rating', 'mean')", "('Rating', 'count')"]]
reviews_MonthSector.columns = ['Sector', 'Year-Month', 'Rating', 'Count']
reviews_MonthSector.head()
sectors = reviews_MonthSector.Sector.unique()
# add 3-month rating average to the DF
reviews_MonthSector['3M_Average']=0
i=0
for sector in sectors:
avg = reviews_MonthSector[reviews_MonthSector.Sector==sector].Rating.rolling(window=3).mean()
start = i
end = i+avg.shape[0]
reviews_MonthSector.iloc[start:end, -1] = avg
i+=avg.shape[0]
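# Note (equivalent sketch): groupby + transform computes the same per-sector rolling mean
# without manual index bookkeeping, assuming rows are ordered by Sector then Year-Month:
# reviews_MonthSector['3M_Average'] = (reviews_MonthSector
#     .groupby('Sector').Rating
#     .transform(lambda s: s.rolling(window=3).mean()))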
reviews_MonthSector
###Output
_____no_output_____ |
tp3/WordEmbeddings.ipynb | ###Markdown
Word Embeddings: the Word2Vec model Imports
###Code
import sys
# library to build bi-grams
from gensim.models.phrases import Phrases, Phraser
from gensim.models import Word2Vec
import nltk
from nltk.tokenize import wordpunct_tokenize
from unidecode import unidecode
###Output
_____no_output_____
###Markdown
Loading and processing the corpus sentences Creating an object that *streams* the lines of a file to save RAM
###Code
class MySentences(object):
"""Tokenize and Lemmatize sentences"""
def __init__(self, filename):
self.filename = filename
def __iter__(self):
for line in open(self.filename, encoding='utf-8', errors="backslashreplace"):
yield [unidecode(w.lower()) for w in wordpunct_tokenize(line)]
infile = "../data/sents.txt"
sentences = MySentences(infile)
###Output
_____no_output_____
###Markdown
Detecting bigrams
###Code
bigram_phrases = Phrases(sentences)
type(bigram_phrases.vocab)
len(bigram_phrases.vocab.keys())
###Output
_____no_output_____
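###Markdown
The collocation detection can be tuned; a hedged sketch of the main knobs (the values below are illustrative, not the ones used in this notebook):
###Code
# illustrative: min_count drops rare pairs, threshold raises the bar for accepting a bigram
# bigram_phrases_tuned = Phrases(sentences, min_count=10, threshold=15)
###Output
_____no_output_____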
###Markdown
Let's take a key at random:
###Code
key_ = list(bigram_phrases.vocab.keys())[145]
print(key_)
bigram_phrases.vocab[key_]
###Output
_____no_output_____
###Markdown
Converting the `Phrases` into a `Phraser` object `Phraser` is an alias for `gensim.models.phrases.FrozenPhrases`, see https://radimrehurek.com/gensim/models/phrases.html. The `Phraser` is a lightweight version of `Phrases`, better optimized for transforming sentences by concatenating bigrams.
###Code
bigram_phraser = Phraser(phrases_model=bigram_phrases)
###Output
_____no_output_____
###Markdown
The `Phraser` is an object that converts certain unigrams of a list into bigrams when they have been identified as relevant. Extracting trigrams We repeat the operation, this time feeding in the list of bigrams in order to extract the trigrams.
###Code
trigram_phrases = Phrases(bigram_phraser[sentences])
trigram_phraser = Phraser(phrases_model=trigram_phrases)
###Output
_____no_output_____
###Markdown
Creating a corpus of unigrams, bigrams and trigrams
###Code
corpus = list(trigram_phraser[bigram_phraser[sentences]])
print(corpus[:10])
###Output
[['v', 'i', 'l', 'l', 'e', 'de', 'bruxelles', 'bulletin', 'ires', '8eanas', 'dl', '!'], ['conseil_communal', 'annee', '1847', '.'], ['au', 'ville', 'de', 'b', 'r', 'u', 'x', 'e', 'l', 'l', 'e', 's', '.'], ['bulletin', 'conseil', 'aes', 'seances', 'communal', '.'], ['annee', '1847', '.'], ['bruxelles', ',', 'imprimerie', 'd', 'e', 'j', '.'], ['h', '.', 'b', 'r', 'i', 'a', 'r', 'd', ',', 'rite', 'n', 'e', 'u', 'v', 'e', ',', '3', '1', ',', 'faubourg', 'de', 'n', 'a', 'm', 'u', 'r', ',', '1', '84', '8', 'de', '!'], ['du', 'consei', 'dibi', 'e', '.', '-', 'communication', 'conclusions', 'de', 'la', 'section', 'des', 'du', 'nouvel_hospice', 'pour', 'les', 'av', 'enraisonde', 'l', "'", 'absence', '&', 'maladie', '.', 'le', 'conseil', 'ajourne', 'leurs', 'de', 'pierre', 'el', 'marchai', 'cles', 'des', 'taxes', 'communale', "'", 'bieniaance', 'eldeseianv', 'il', 'est', 'donne', 'communie', ';', 'mandant', 'le', 'o', 'p', 'fa', 'gnant', 'l', "'", 'envoi', 'de', 'leur', 'bn', 'par', 'l', "'", 'etat', 'obligatoire', 'p', 'secretariat', 'et', 'dtput', 'uf', 'proposition', 'dan', '*', 'le', 'meme', 'u', 'est', 'donne_lecture', 'd', "'", 't', 'glissement', 'd', "'", 'un', 'marc', '!'], ["'*", 'royales', ',', 'rue', 'de', 'la', 'i', 'd', 'e', 'k', ':', ';', 'i', 'fai', 'phonnenr', 'de', 'to', '>>', '<<', '<<', 'terrains', 'reumsderb', '."'], ['^', 'par', 'une', 'combinaison', 'f', 'sans', 'devoir', 'fe', 'soit', 'dow', 'ans', ',', 'un', 'marcs', '1', 's', 'u', 'r', 'l', 'iraocs', '.']]
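###Markdown
As a quick sanity check (a hypothetical example, not part of the original pipeline), the phrasers can be applied to a single tokenized sentence; tokens detected as collocations come back joined by an underscore:
###Code
# hypothetical check: if "conseil communal" was frequent enough, it comes back as "conseil_communal"
print(trigram_phraser[bigram_phraser[["le", "conseil", "communal", "de", "bruxelles"]]])
###Output
_____no_output_____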
###Markdown
Training a Word2Vec model on this corpus
###Code
%%time
model = Word2Vec(
corpus, # the corpus of ngrams we just created
vector_size=32, # number of dimensions the word contexts are reduced to, aka. vector_size
window=5, # size of the "context": here, 5 words before and after the observed word
min_count=5, # ignore words that appear fewer than 5 times in the corpus
workers=4, # parallelize the model training over 4 threads
epochs=5 # number of passes of the neural network over the dataset to fit the parameters by gradient descent, aka. epochs.
)
###Output
CPU times: user 10min 39s, sys: 17.8 s, total: 10min 57s
Wall time: 6min 12s
###Markdown
Note You can see here that the model training is parallelized (over 4 workers). When training is parallelized, 4 "separate" models are trained on roughly a quarter of the sentences each. The results are then aggregated into a single model. We cannot predict which worker will get which sentence, since parallelization involves some randomness (e.g. one worker may be slower, etc.). As a result, the values may vary slightly from one training run to the next. But overall, the results remain consistent. Saving the model to a file
###Code
outfile = "../data/bulletins.model"
model.save(outfile)
###Output
_____no_output_____
###Markdown
Exploring the model Loading the model into memory
###Code
model = Word2Vec.load("../data/bulletins.model")
###Output
_____no_output_____
###Markdown
Printing the vector of a term
###Code
model.wv["bruxelles"]
###Output
_____no_output_____
###Markdown
Computing the similarity between two terms
###Code
# similarity ranges from -1 (the further apart the words) to 1 (the closer they are)
# model.wv.similarity("boucher", "boulanger")
# model.wv.similarity("bourgmestre", "ministre")
model.wv.similarity("place", "parc")
model.wv.similarity("route", "rue")
model.wv.similarity("conseil", "college")
###Output
_____no_output_____
###Markdown
Finding the words closest to a given term
###Code
model.wv.most_similar("belgique", topn=10)
model.wv.most_similar("quartier", topn=10)
model.wv.most_similar("devis", topn=10)
###Output
_____no_output_____
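###Markdown
`most_similar` also accepts positive and negative term lists for analogy-style queries; an illustrative sketch (terms chosen arbitrarily, results depend on the trained vectors):
###Code
# illustrative analogy query: combine vectors with positive and negative terms
model.wv.most_similar(positive=["rue", "parc"], negative=["route"], topn=5)
###Output
_____no_output_____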
###Markdown
Plotting the words closest to a given term Imports
###Code
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.decomposition import PCA
import pandas as pd
# It is interesting to change the word and the topn value to see different results in the chart
#word_list = dict(model.wv.most_similar("belgique", topn=20)).keys()
word_list = dict(model.wv.most_similar("belgique", topn=10)).keys()
# Loop to save the vector of each of the top-n most similar words
word_vectors = {}
for word in word_list:
vectors = model.wv[word]
word_vectors[word] = vectors
###Output
_____no_output_____
###Markdown
Reducing the vectors to 2 dimensions Documentation for the function: https://towardsdatascience.com/how-to-draw-a-map-using-python-and-word2vec-e9627b4eae34
###Code
def plot_2d_representation_of_words(
word_list,
word_vectors,
flip_x_axis = False,
flip_y_axis = False,
label_x_axis = "x",
label_y_axis = "y",
label_label = "X"):
pca = PCA(n_components = 2)
word_plus_coordinates=[]
for word in word_list:
current_row = []
current_row.append(word)
current_row.extend(word_vectors[word])
word_plus_coordinates.append(current_row)
word_plus_coordinates = pd.DataFrame(word_plus_coordinates)
# column 0 holds the word; keep all remaining embedding dimensions (avoids an off-by-one)
loc = word_plus_coordinates.iloc[:, 1:]
coordinates_2d = pca.fit_transform(
loc)
coordinates_2d = pd.DataFrame(
coordinates_2d, columns=[label_x_axis, label_y_axis])
coordinates_2d[label_label] = word_plus_coordinates.iloc[:,0]
if flip_x_axis:
coordinates_2d[label_x_axis] = \
coordinates_2d[label_x_axis] * (-1)
if flip_y_axis:
coordinates_2d[label_y_axis] = \
coordinates_2d[label_y_axis] * (-1)
plt.figure(figsize = (15,10))
p1=sns.scatterplot(
data=coordinates_2d, x=label_x_axis, y=label_y_axis)
x = coordinates_2d[label_x_axis]
y = coordinates_2d[label_y_axis]
label = coordinates_2d[label_label]
texts = [plt.text(x[i], y[i], label[i]) for i in range(len(x))]
###Output
_____no_output_____
###Markdown
Générer le plot Plot avec un topn=10
###Code
plot_2d_representation_of_words(
word_list = word_list,
word_vectors = word_vectors,
flip_y_axis = False)
###Output
_____no_output_____
###Markdown
Plot with topn=20 (to actually plot 20 words, regenerate `word_list` above with topn=20)
###Code
plot_2d_representation_of_words(
word_list = word_list,
word_vectors = word_vectors,
flip_y_axis = False)
###Output
_____no_output_____ |
Week 10/SLU17_1 - Exam Prep I/Solutions notebook.ipynb | ###Markdown
SLU 17 - Exam Prep I Batch 1 Python Exam This is the Python exam from batch 1. Exam Duration: 2h
###Code
import json
#used for evaluation
import utils
###Output
_____no_output_____
###Markdown
Question 1Complete the function `import_data`. This function should:- Open the data/countries.json file;- Use Python’s module json to load the data from the file into a dictionary;- Return the dictionary;
###Code
def import_data():
### BEGIN SOLUTION
with open("data/countries.json") as json_file:
data = json.load(json_file)
return data
### END SOLUTION
utils.b1_exerc_1_grading(import_data)
print("Answer is correct. Good Job.")
###Output
Answer is correct. Good Job.
###Markdown
Question 2The dictionary that you built contains information about a series of countries. So from now on, we’ll call it the `countries` dictionary.In the `countries` dictionary, each country has a field called `Population` that gives the number of people living in that country. Since this is a count, the data should have integer type. However, we have float…Complete the function `convert_population_to_int`. This function should:- Receive the `countries` dictionary as argument;- Convert the `Population` field from float to int type for all the countries, using a for loop;- Return the updated `countries` dictionary;
###Code
def convert_population_to_int(countries):
### BEGIN SOLUTION
for country in countries.keys():
countries[country]["Population"] = int(countries[country]["Population"])
return countries
### END SOLUTION
utils.b1_exerc_2_grading(convert_population_to_int)
print("Answer is correct. Good Job.")
###Output
Answer is correct. Good Job.
###Markdown
Question 3In the countries dictionary, each country has a field called `Area`. This is the area of the country given in square miles.Complete the function `convert_area_to_sq_km`. This function should:- Receive the `countries` dictionary as argument;- Convert the `Area` field from square miles to square kilometres; - Use the following conversion: 1 sq mi = 2.58999 sq km; - Round the result to 1 decimal digit;- Return the updated `countries` dictionary;
###Code
def convert_area_to_sq_km(countries):
### BEGIN SOLUTION
for country in countries.keys():
countries[country]["Area"] = round(countries[country]["Area"] * 2.58999, 1)
return countries
### END SOLUTION
utils.b1_exerc_3_grading(convert_area_to_sq_km)
print("Answer is correct. Good Job.")
###Output
Answer is correct. Good Job.
###Markdown
Question 4Complete the function `get_europe_countries`. This function should:- Receive the `countries` dictionary as argument;- Build a list with the names of the countries in Europe sorted by alphabetical order (from A to Z); - Use the field called `Continent`; - Use list comprehension; - Use the sort method from the list data type;- Return the list;
###Code
def get_europe_countries(countries):
### BEGIN SOLUTION
countries_list = [country for country, properties in countries.items() if properties["Continent"] == "Europe"]
countries_list.sort()
return countries_list
### END SOLUTION
utils.b1_exerc_4_grading(get_europe_countries)
print("Answer is correct. Good Job.")
###Output
Answer is correct. Good Job.
###Markdown
Question 5Complete the function `get_literacy_levels_by_continent`. This function should:- Receive the `countries` dictionary as argument;- Receive a continent name as argument;- For the countries in the continent (received as argument), compute the literacy level as: - literacy in [0, 25[ % - VERY_LOW; - literacy in [25, 50[ % - LOW; - literacy in [50, 70[ % - MEDIUM; - literacy in [70, 90[ % - HIGH; - literacy in [90, 100] % - VERY_HIGH;- Return a list of tuples like: [(country_1, literacy_1, literacy_level_1), ..., (country_n, literacy_n, literacy_level_n), ...];
###Code
def get_literacy_levels_by_continent(countries, continent):
### BEGIN SOLUTION
continent_literacy = []
for country, properties in countries.items():
if properties["Continent"] == continent:
if properties["Literacy"] < 25.0:
literacy_description = "VERY_LOW"
elif properties["Literacy"] < 50.0:
literacy_description = "LOW"
elif properties["Literacy"] < 70.0:
literacy_description = "MEDIUM"
elif properties["Literacy"] < 90.0:
literacy_description = "HIGH"
elif properties["Literacy"] <= 100.0:
literacy_description = "VERY_HIGH"
continent_literacy.append((country, properties["Literacy"], literacy_description))
return continent_literacy
### END SOLUTION
utils.b1_exerc_5_grading(get_literacy_levels_by_continent, "Africa")
utils.b1_exerc_5_grading(get_literacy_levels_by_continent, "Europe")
print("Answer is correct. Good Job.")
###Output
Answer is correct. Good Job.
###Markdown
Question 6Complete the function `get_country_codes`. This function should:- Receive the `countries` dictionary as argument;- Build a list with all the country names;- Using `map` and a `lambda` function, convert the country names into country codes by selecting the first 3 letters of each name;- Return this list;Note: Don't worry about duplicates in the result.
###Code
def get_country_codes(countries):
### BEGIN SOLUTION
country_names = list(countries.keys())
country_codes = list(map(lambda x: x[:3], country_names))
return country_codes
### END SOLUTION
utils.b1_exerc_6_grading(get_country_codes)
print("Answer is correct. Good Job.")
###Output
Answer is correct. Good Job.
###Markdown
Question 7The `Country` class is going to be used to represent countries and to return information about them. Question 7.1Complete the `get_population_in_millions` method. This method should:- Return the population of the country that the class represents;- The population should be returned in millions with 2 decimal digits;
###Code
class Country:
def __init__(self, country_name, population, continent):
self.country_name = country_name
self.population = population
self.continent = continent
def get_population_in_millions(self):
### BEGIN SOLUTION
return round(self.population / 1E6, 2)
### END SOLUTION
utils.b1_exerc_7_1_grading(Country("Penguinea", 12345678, "Antarctica"), Country)
utils.b1_exerc_7_1_grading(Country("Walrussia", 8344567, "Antarctica"), Country)
print("Answer is correct. Good Job.")
###Output
Answer is correct. Good Job.
###Markdown
Question 7.2Complete the `get_country_population`. This function should:- Receive the `countries` dictionary as argument;- Receive a country name as argument;- Try to find the country (received as argument) in the countries dictionary;- If the country doesn’t exist in the countries dictionary, catch the KeyError, print a statement with the information that there is no information for that country and return `None`;- If the country exists, create an object of the `Country` class;- Call the `get_population_in_millions` method for the country;- Return the population in millions value;
###Code
def get_country_population(countries, country_name):
### BEGIN SOLUTION
try:
country_dict = countries[country_name]
except KeyError as e:
print("There is no information on country: " + country_name)
return None
country_obj = Country(country_name, country_dict["Population"], country_dict["Continent"])
return country_obj.get_population_in_millions()
### END SOLUTION
utils.b1_exerc_7_2_grading("Turkey", get_country_population)
utils.b1_exerc_7_2_grading("Narnia", get_country_population)
print("Answer is correct. Good Job.")
###Output
There is no information on country: Narnia
Answer is correct. Good Job.
###Markdown
Last but not least, submit your work! To submit your work, fill your slack ID in the `slack_id` variable (as a string).Example: `slack_id = "x-men"`Help: if you forgot your slack ID, [read this](https://moshfeu.medium.com/how-to-find-my-member-id-in-slack-workspace-d4bba942e38c).
###Code
# Submit your work!
#slack_id =
### BEGIN SOLUTION
slack_id = "ADMINSLU17_1"
### END SOLUTION
from submit import submit
assert isinstance(slack_id, str)
slu = 17_1  # note: Python parses 17_1 as the integer 171 (underscore is a digit separator)
submit(slack_id, slu)
###Output
Success
|
week_07/week-7-os_SOLUTION.ipynb | ###Markdown
Files and Folders using Python We'll use the ```os module``` and ```pathlib``` to create, navigate and delete files and folders. You already know how to do this with terminal commands like ```ls```, ```cd``` and ```mkdir```, but here we'll do it from within our script.
###Code
## import libraries
import os ## allows you to navigate, create, delete folders
from pathlib import Path ## allows to create paths to files and folders
###Output
_____no_output_____
###Markdown
List what is in the directory NOTE: regular terminal commands have to be in empty cells
###Code
ls
## Python scriptable
os.listdir()
## create a path to folder called some_new_folder
## we store that path in a variable called my_new_directory
my_new_directory = Path('some_new_folder/')
## create that directory
my_new_directory.mkdir(exist_ok=True)
## show directory now
os.listdir()
###Output
_____no_output_____
###Markdown
You don't have to create a variable for the path, but it is easier to resuse that path```Path('some_new_folder/').mkdir(exist_ok=True)```
###Code
## remove an empty directory
## NOTE: This only removes empty directories
my_new_directory.rmdir()
## show directory now
os.listdir(".")
###Output
_____no_output_____
###Markdown
list using terminal command
###Code
ls
###Output
_____no_output_____
###Markdown
Let's add some crap to some_new_folder and then delete it
###Code
## create that directory
my_new_directory.mkdir(exist_ok=True)
###Output
_____no_output_____
###Markdown
Take a moment to manually add some crap
###Code
## navigate into a directory
os.chdir("some_new_folder/")
ls
###Output
_____no_output_____
###Markdown
back out of directory
###Code
os.chdir("../")
ls
## To empty a directory with files in it, we use another library called shutil
import shutil
## Now delete all contents
shutil.rmtree(my_new_directory)
ls
###Output
_____no_output_____ |
pacote-download/Estudos - Criando e Acessando Banco de Dados.ipynb | ###Markdown
Accessing a database with Python
###Code
# Remove the SQLite database file (if it exists)
import os
os.remove('escola.db') if os.path.exists('escola.db') else None
# Import the module (library) that provides access to SQLite
import sqlite3
# Create a connection to the database / If the database does not exist, it is created at this point
# Python creates the database in the same directory as the Jupyter file; if you want to point
# to another directory, just change the path
con = sqlite3.connect('escola.db')
# Check the object's type
type(con)
# Create a cursor
# A cursor lets you iterate over all the records in a result set / The cursor gives me access to the DB
cur = con.cursor()
type(cur)
# Create a SQL statement (table definition)
sql_create = 'create table cursos '\
'(id integer primary key, '\
'titulo varchar(100), '\
'categoria varchar(140))'
# Execute the SQL statement on the cursor to create the table
cur.execute(sql_create)
# Create a SQL insert command; the ? placeholders are parameters filled in with the data later
sql_insert = 'insert into cursos values (?,?,?)'
# The data to insert
recset = [(1000, 'Ciência de Dados', 'Data Science'),
(1001, 'Big Data Fundamentos', 'Big Data'),
(1002, 'Python Fundamentos', 'Análise de Dados')]
# Insert the records (data)
for rec in recset:
cur.execute(sql_insert, rec)
# Commit to write the data to the DB
con.commit()
# Create another SQL command to select records
sql_select = 'select * from cursos'
# Use execute to run the select and retrieve the records / fetchall grabs all the records and
# stores them in the variable (object)
cur.execute(sql_select)
dados = cur.fetchall()
# Show the data
for linha in dados:
print('Curso ID: %d, Título: %s, Categoria: %s' % linha)
# Generate more records
recset = [(1003, 'Gestão de Dados com MongoDB', 'Big Data'),
(1004, 'R Fundamentos', 'Análise de Dados')]
# Insert the records
for rec in recset:
cur.execute(sql_insert, rec)
# Commit the records to the DB
con.commit()
# Select all the records, another way
cur.execute('select * from cursos')
# Store all the records
recset = cur.fetchall()
# Print the records
for rec in recset:
print('Curso ID: %d, Título: %s, Categoria: %s \n'% rec)
# Close my connection to the DB
con.close()
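# Note (illustrative sketch): a connection can also be used as a context manager,
# which commits automatically on success and rolls back on error:
# with sqlite3.connect('escola.db') as conn:
#     conn.execute(sql_insert, (1005, 'SQL Avancado', 'Banco de Dados'))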
###Output
_____no_output_____ |
Notebooks/Vanilla/Customer_satisfaction_example/customer_satisfaction_challenge.ipynb | ###Markdown
Santander customer satisfaction challenge Problem Customer satisfaction is a key measure of success for all businesses. Unhappy customers don't stay with the same provider and they rarely voice their dissatisfaction before leaving. In this context, Santander bank launched a challenge in Kaggle in order to build models that predict potential unhappy customers--- Objective The objective of this competition is to be able to identify unhappy customers early and anticipate their leaving which would allow the company to take proactive steps to improve a customer's happiness before it's too late. In this competition, you'll work with hundreds of anonymized features to predict if a customer is satisfied or dissatisfied with their banking experience. Data The data is an anonymized dataset containing a large number of numeric variables. The "TARGET" column is the variable to predict. It equals 1 for unsatisfied customers and 0 for satisfied customers. The task is to predict the probability that each customer in the test set is an unsatisfied customer.- train.csv: (371 columns): The training set including the target- test.csv: (370 columns): The test set without the target Install Vectice and GCS packages Vectice provides a generic metadata layer that is potentially suitable for most data science workflows. For this tutorial we will use the sickit-learn library for modeling and track experiments directly through our Python SDK to illustrate how to fine-tune exactly what you would like to track: metrics, etc. The same mechanisms would apply to R, Java or even more generic REST APIs to track metadata from any programming language and library. Here is a link to the Python SDK Documentation, it's not final nor complete and it is updated as we go along. [Python SDK Documentation](https://doc-dev.vectice.com/)
###Code
!pip3 install -q fsspec
!pip3 install -q gcsfs
!pip3 install -q vectice
!pip3 show vectice
###Output
_____no_output_____
###Markdown
Install the required packages **Especially if you're working locally and you didn't already install them**
###Code
!pip install -q numpy
!pip install -q pandas
!pip install -q matplotlib
!pip install -q seaborn
!pip install -q sklearn
!pip install -q lightgbm
!pip install -q imblearn
###Output
_____no_output_____
###Markdown
Import the required packages
###Code
import os
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from vectice.models import JobType
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score, auc
from sklearn.metrics import f1_score, confusion_matrix, precision_recall_curve, roc_curve
import lightgbm as lgb
from lightgbm import plot_importance
from imblearn.over_sampling import SMOTE
from collections import Counter
plt.style.use('seaborn')
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Retrieve the data from GCS If you're not using Google Colab, don't run the following cell; put the json key access file that was provided with your tutorial account in the same directory as this notebook
###Code
# Don't run this cell if you're not using Colab
# Load your json key file to access GCS. It can be found in the tutorial page
# The name should be something like readerKey.json
from google.colab import files
uploaded = files.upload()
# Once your file is loaded set the credentials for GCS and load the file
# in a pandas frame, double check the json file name you uploaded below.
### Complete with the name of the JSON key file to access GCS. It can be found in the tutorial page
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'Name of the GCS key access file (readerKey.json)'
## Get the dataset from GCS
train_df = pd.read_csv("gs://vectice-examples-samples/Customer_satisfaction_challenge/dataset.csv")
# Run head to make sure the data was loaded properly
print(train_df.head())
###Output
_____no_output_____
###Markdown
Data exploration Data exploration enables us to take a first look at the data, enhances our overall understanding of the characteristics of the data domain, and helps to detect correlations between the features, thereby allowing for the creation of more accurate models
###Code
print("Train Data Shape : ",train_df.shape)
train_df['TARGET'].value_counts()
train_df.info()
train_df.describe()
features = train_df.drop(['ID','TARGET'],axis=1)
###Output
_____no_output_____
###Markdown
Exploratory data analysis (EDA)* Target percentage* Check multicollinearity* Check outliers EDA is a technique that helps to explore and understand data sets by summarizing their main characteristics, often plotting them visually. It uses histograms, box plots, scatter plots and many more. EDA is about gathering as many insights from the data as we can in order to understand it
###Code
pd.DataFrame(train_df['TARGET'].value_counts())
###Output
_____no_output_____
###Markdown
The training set is way imbalanced (73012 zeros vs 3008 ones), so some algorithms may learn mostly from the zeros, which can affect our predictions. We address that by using oversampling
###Code
f, ax = plt.subplots(1,2,figsize=(10,4))
train_df['TARGET'].value_counts().plot.pie(
explode=[0,0.1],autopct='%1.1f%%',ax=ax[0],shadow=True
)
sns.countplot('TARGET', data=train_df, ax=ax[1])
plt.show()
null_value = train_df.isnull().sum().sort_values(ascending=False)
null_percent = round(train_df.isnull().sum().sort_values(ascending=False)/len(train_df)*100,2)
pd.concat([null_value, null_percent], axis=1, keys=['Null values', 'Percent'])
###Output
_____no_output_____
###Markdown
There are no columns with null values **Correlation** If two features are highly correlated, we have a multicollinearity problem. That means some features depend on other features, so we should reduce the dimensionality of our data (if A depends on B, we should either find a way to aggregate or combine the two features into one variable, or drop one of the variables that is too highly correlated with another); this can be addressed using Principal Component Analysis (PCA)
###Code
features[features.columns[:8]].corr()
sns.heatmap(features[features.columns[:8]].corr(),annot=True,cmap='YlGnBu')
fig=plt.gcf()
fig.set_size_inches(10,8)
plt.show()
###Output
_____no_output_____
###Markdown
=> We can check multicollinearity Multicollinearity is a phenomenon in which one independent variable is highly correlated with one or more of the other independent variables
###Code
plt.figure(figsize=(16,6))
plt.title("Distribution of mean values per row in the train and test set")
sns.distplot(train_df[features.columns].mean(axis=1),color="black", kde=True,bins=120, label='train')
plt.legend()
plt.show()
plt.figure(figsize=(16,6))
plt.title("Distribution of std values per rows in the train and test set")
sns.distplot(train_df[features.columns].std(axis=1),color="blue",kde=True,bins=120, label='train')
plt.legend(); plt.show()
t0 = train_df[train_df['TARGET'] == 0]
t1 = train_df[train_df['TARGET'] == 1]
plt.figure(figsize=(16,6))
plt.title("Distribution of skew values per row in the train set")
sns.distplot(t0[features.columns].skew(axis=1),color="red", kde=True,bins=120, label='target = 0')
sns.distplot(t1[features.columns].skew(axis=1),color="blue", kde=True,bins=120, label='target = 1')
plt.legend(); plt.show()
###Output
_____no_output_____
###Markdown
=> We can check for outliers An outlier is a value or point that differs substantially from the rest of the data
###Code
train_df.describe()
plt.boxplot(train_df['var3'])
plt.boxplot(train_df['var38'])
###Output
_____no_output_____
###Markdown
The training set:- Contains continuous and categorical data (categorical variables such as IDs must be treated as categorical, not numeric, otherwise values like 10000 > 1 get a spurious ordering)- Contains variables with zero variance or no predictive value- Contains fake values (-999999) that were introduced to replace missing data- Is heavily imbalanced Preprocessing Data preprocessing is the process of cleaning and organizing the raw data to make it suitable for building and training machine learning models. Our dataset is imbalanced; we will use oversampling to resolve this problem, because otherwise some algorithms would learn more from the zeros than from the ones in our training dataset* Processing outlier values
###Code
train_df['var3'].replace(-999999,2,inplace=True)
train_df.describe()
###Output
_____no_output_____
###Markdown
Connect to your Vectice project Here we are going to need an API token and a project token. An API token is used to secure requests between your existing tools and Vectice. You can create and manage those at the API Tokens tab in your workspace; they impersonate you and your rights per workspace, so we strongly recommend that you avoid sharing them. A project token is used to target the project you're working on in the UI and can be found (after creating a project) in the Project settings page; anyone working on the project can see it and copy/paste it.
###Code
# In order to use Vectice SDK, let's set up the configurations first.
# The Vectice API key below can be generated from the UI.
# For better security, the settings can also be put into a dedicated file called `.vectice` or `.env`.
## Make sure that you're using the right endpoint
from vectice import Vectice
from vectice.entity.model import ModelType
os.environ['VECTICE_API_ENDPOINT']= "beta.vectice.com"
os.environ['VECTICE_API_TOKEN'] = ""
## Create a Vetice instance to connect to your project using your project token
vectice = Vectice(project_token="")
print(vectice)
###Output
_____no_output_____
###Markdown
Feature Engineering Feature engineering is about creating new input features from your existing ones and can be seen as an additive process that helps to improve the model's performance by:- Isolating and highlighting key information, which helps the algorithms "focus" on what's important.- Letting you bring in your own domain expertise.- Most importantly, once you understand the "vocabulary" of feature engineering, you can bring in other people's domain expertise!In this part we will:* Split the data into train / test sets * Standardize the training data with StandardScaler* Oversample the target data with SMOTE
###Code
train_df.drop('ID',axis=1,inplace=True)
x = train_df.drop('TARGET',axis=1)
y = train_df['TARGET']
###Output
_____no_output_____
###Markdown
Resolving the problem of multicollinearity Here we are going to use the Pearson correlation method. It is the most common method for numerical variables; it assigns a value between -1 and 1, where 0 is no correlation, 1 is total positive correlation, and -1 is total negative correlation. This is interpreted as follows: a correlation value of 0.7 between two variables would indicate that a significant and positive relationship exists between the two. A positive correlation signifies that if variable A goes up, then B will also go up, whereas if the value of the correlation is negative, then if A increases, B decreases
###Code
def correlation(dataset, threshold):
col_corr = set() # Set of all the names of correlated columns
corr_matrix = dataset.corr()
for i in range(len(corr_matrix.columns)):
for j in range(i):
if abs(corr_matrix.iloc[i, j]) > threshold: # we are interested in absolute coeff value
colname = corr_matrix.columns[i] # getting the name of column
col_corr.add(colname)
return col_corr
###Output
_____no_output_____
###Markdown
We consider a threshold of 0.9 to avoid high correlation
###Code
corr_features = correlation(x, 0.9)
len(set(corr_features))
x = x.drop(corr_features,axis=1)
###Output
_____no_output_____
###Markdown
Standardize data StandardScaler is a common preprocessing step applied before many machine learning models in order to standardize the range of the input features. It rescales the distribution of values so that the mean of the observed values is 0 and the standard deviation is 1
###Code
scaler = StandardScaler().fit(x)
x_scaler = scaler.transform(x)
x_scaler_df = pd.DataFrame(x_scaler, columns=x.columns)
###Output
_____no_output_____
###Markdown
**Principal component analysis (PCA)** is a statistical technique used for dimensionality reduction while preserving as much of the data's structure as possible. PCA can help identify correlations between data points: it projects the data onto coordinates that maximize the variance of the projections (equivalently, minimizing the residual variance in the least-squares sense)
###Code
pca = PCA(n_components=0.95)
x_scaler_pca = pca.fit_transform(x_scaler)
x_scaler_pca_df = pd.DataFrame(x_scaler_pca)
x_scaler_pca_df.head()
pca.explained_variance_ratio_
plt.scatter(x_scaler_pca_df.loc[:, 0], x_scaler_pca_df.loc[:, 1], c=y, cmap="copper_r")
plt.axis('off')
plt.colorbar()
plt.show()
###Output
_____no_output_____
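###Markdown
The cumulative explained variance makes this concrete; a minimal sketch:
###Code
# sketch: how many components were kept to reach the requested 95% of variance,
# and how slowly the cumulative share grows over the first components
print(len(pca.explained_variance_ratio_), "components kept to reach 95% of the variance")
print(np.cumsum(pca.explained_variance_ratio_)[:10])
###Output
_____no_output_____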
###Markdown
=> We can't use PCA here, since it doesn't meaningfully reduce the dimensionality (the variance is spread across many variables, and no small set of components captures a considerable part of it) Split the data and use oversampling ML algorithms can perform poorly on datasets in which one or more classes represent only a small proportion of the overall dataset compared with a dominant class. This problem can be solved by balancing the number of examples between the classes. Here we use SMOTE (Synthetic Minority Over-sampling Technique), which creates synthetic data points mid-way between two near neighbours within a given class Create a dataset containing your data to use as input for your data-splitting job. That can be done through the UI by going to your project, clicking on Datasets and then clicking on Add (you need to add a connection to be able to create a dataset) Create a dataset version based on the dataset you created above
###Code
# Use auto-versioning here
input_ds_version = vectice.create_dataset_version().with_parent_name("Your dataset's name")
###Output
_____no_output_____
###Markdown
The following code splits the dataset into train and test sets and uses the SMOTE method for oversampling in order to balance our dataset. Please complete it by creating a PREPARATION job run, starting it, and then declaring train_set and test_set as dataset versions (after creating the datasets in the UI) so they can be used as inputs for the different models
###Code
uri = "https://github.com/vectice/vectice-examples"
script_relative_path="Notebooks/Vanilla/Customer_satisfaction_example/customer_satisfaction_challenge.ipynb"
input_code = Vectice.create_code_version_with_github_uri(uri=uri, script_relative_path=script_relative_path)
input_ds_version = input_ds_version
# Start a Vectice run. The job type should be PREPARATION in this case.
vectice.create_run("jobSplitData_Customer_Satisfaction", JobType.PREPARATION)
with vectice.start_run(inputs=[input_ds_version,input_code]) as run:
#with vectice.start_run(inputs=[input_ds_version]) as run:
#Split data
scaler_x_train, scaler_x_test, scaler_y_train, scaler_y_test = train_test_split(x_scaler, y, test_size=0.3)
#Use SMOTE to oversample the dataset
x_over, y_over = SMOTE().fit_resample(scaler_x_train,scaler_y_train)
print(sorted(Counter(y_over).items()))
# We commented out the code to persist the training and testing test in GCS,
# because we already generated it for you, but feel free to uncomment it and execute it.
# The key you were provided for this tutorial may not have write permissions to GCS.
# Let us know if you want to be able to write files as well and we can issue you a different key.
## Get training and testing data in dataframes in orderto upload them to GCS
#train_set = pd.DataFrame(x_over, columns=x.columns).join(pd.DataFrame(y_over, columns=["TARGET"]))
#test_set = pd.DataFrame(scaler_x_test, columns=x.columns).join(pd.DataFrame(scaler_y_test, columns=["TARGET"]))
#train_set.to_csv (r'gs://vectice-examples-samples/Customer_satisfaction_challenge/training_data.csv', index = False, header = True)
#test_set.to_csv (r'gs://vectice-examples-samples/Customer_satisfaction_challenge/testing_data.csv', index = False, header = True)
# Don't forget to create the datasets from the SDK (You should have writing rights to do that) or from the UI before creating train_ds_version and test_ds_version
train_ds_version = vectice.create_dataset_version().with_parent_name("train dataset name")
test_ds_version = vectice.create_dataset_version().with_parent_name("test dataset name")
run.add_outputs([train_ds_version,test_ds_version])
###Output
_____no_output_____
###Markdown
Our data now contains the same number of zeros and ones Get different user versions Generate a random user version by calling get_random_string
###Code
# Let's generate some unique names for our following modeling experiments
import random
import string
def get_random_string(length):
return "".join(random.choice(string.ascii_letters) for i in range(length))
###Output
_____no_output_____
###Markdown
Modeling* LogisticRegression* LightGBM Classification Here we create a function that calculates and shows the confusion matrix and the accuracy, precision, recall, f1_score and roc_auc metrics.- Confusion matrix: Confusion matrices represent counts of predicted vs. actual values. They show the numbers of TP, FP, FN and TN- Accuracy: The model's capability to correctly predict both the positives and negatives out of all the predictions. Accuracy_score = (TP + TN) / (TP + FN + TN + FP)- Precision: The model's capability to correctly predict the positives out of all the positive predictions it made. Precision Score = TP / (FP + TP)- Recall: The model's capability to correctly predict the positives out of the actual positives. Unlike precision, which measures how many of the model's positive predictions are actually positive, recall measures how many of the actual positives are found. Recall is a useful measure of prediction success when the classes are very imbalanced. Recall Score = TP / (FN + TP)- F1 score: The model score as a function of precision and recall. It is a useful measure in scenarios where optimizing either precision or recall alone would make the other suffer. F1_Score = 2 * Precision Score * Recall Score / (Precision Score + Recall Score)- roc_auc_score: The ROC curve is a graph that shows the performance of a classification model at all possible thresholds (a threshold is the value beyond which you say a point belongs to a particular class). AUC measures how well a model is able to distinguish between classes. The curve is plotted between two parameters: * TRUE POSITIVE RATE * FALSE POSITIVE RATE- True Positive (TP): The number of correct positive predictions out of the actual positive cases. Ex: predict a sick person as sick- False Positive (FP): The number of incorrect positive predictions, i.e. negatives falsely predicted as positives. Ex: predict a well person as sick- True Negative (TN): The number of correct negative predictions out of the actual negative cases. Ex: predict a well person as well- False Negative (FN): The number of incorrect negative predictions, i.e. positives falsely predicted as negatives. Ex: predict a sick person as well
###Code
def get_clf_eval(y_test, pred = None, pred_proba = None):
confusion = confusion_matrix(y_test, pred)
accuracy = accuracy_score(y_test, pred)
precision = precision_score(y_test, pred)
recall = recall_score(y_test, pred)
f1 = f1_score(y_test, pred)
roc_auc = roc_auc_score(y_test, pred_proba)
print('confusion')
print(confusion)
print('Accuracy : {}'.format(np.around(accuracy,4)))
print('Precision: {}'.format(np.around(precision,4)))
print('Recall : {}'.format(np.around(recall,4)))
print('F1 : {}'.format(np.around(f1,4)))
print('ROC_AUC : {}'.format(np.around(roc_auc,4)))
return confusion, accuracy, precision, recall, f1, roc_auc
###Output
_____no_output_____
###Markdown
* **LogisticRegression**[Logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) is used to calculate the probability of a binary event occurring: for example, predicting if a credit card transaction is fraudulent or not, or predicting if an incoming email is spam or not. The following code creates a Logistic regression model and calculates the metrics related to this model. Complete the code by adding a job run to create a model and send the metrics to Vectice (you can look at the examples in the documentation), and don't forget to use the names you generated for your experiments
###Code
uri = "https://github.com/vectice/vectice-examples"
script_relative_path="Notebooks/Vanilla/Customer_satisfaction_example/customer_satisfaction_challenge.ipynb"
input_code = Vectice.create_code_version_with_github_uri(uri=uri, script_relative_path=script_relative_path)
vectice.create_run(job_name = "Train model with Logistic regression", job_type = JobType.TRAINING)
with vectice.start_run(inputs=[train_ds_version,test_ds_version, input_code]) as run:
lg_reg = LogisticRegression()
lg_reg.fit(x_over, y_over)
pred = lg_reg.predict(scaler_x_test)
pred_proba = lg_reg.predict_proba(scaler_x_test)[:,1]
confusion, accuracy, precision, recall, f1, roc_auc = get_clf_eval(scaler_y_test, pred=pred, pred_proba=pred_proba)
metrics = [('Accuracy score', accuracy), ("Precision",precision), ("Recall", recall), ('f1 score', f1), ('AUC score', roc_auc)]
model_version1 = vectice.create_model_version().with_parent_name("Classifier").with_algorithm("Logistic Regression").with_type(ModelType.CLASSIFICATION).with_metrics(metrics).with_user_version(get_random_string(12))
run.add_outputs([model_version1])
###Output
_____no_output_____
###Markdown
* **LightGBM Classifier**[LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html) is a gradient boosting classifier that uses tree-based learning algorithms. It can handle large amounts of data with low memory usage, supports parallel and GPU learning, and offers good accuracy with fast and efficient training
###Code
scaler_x_test, scaler_x_val, scaler_y_test, scaler_y_val = train_test_split(scaler_x_test, scaler_y_test, test_size=0.5)
##Setting up the model's parameters
## Feel free to play with the parameters
train_data = lgb.Dataset(x_over, label=y_over)
val_data = lgb.Dataset(scaler_x_val, label=scaler_y_val)
n_estimators = 5000
num_leaves = 20
max_depth = -1
min_data_in_leaf = 80
learning_rate = 0.001
boosting = 'gbdt'
objective = 'binary'
metric = 'auc'
n_jobs = -1
params = {
'n_estimators': n_estimators,
'num_leaves': num_leaves,
'max_depth': max_depth,
'min_data_in_leaf': min_data_in_leaf,
'learning_rate': learning_rate,
'boosting': boosting,
'objective': objective,
'metric': metric,
'n_jobs': n_jobs
}
###Output
_____no_output_____
###Markdown
The following code creates a LightGBM classifier model and calculates the metrics related to this model. Complete the code by adding a job run to create a model and send the metrics to Vectice, and don't forget to give the dataset versions you created before as inputs to your model
###Code
uri = "https://github.com/vectice/vectice-examples"
script_relative_path="Notebooks/Vanilla/Customer_satisfaction_example/customer_satisfaction_challenge.ipynb"
input_code = Vectice.create_code_version_with_github_uri(uri=uri, script_relative_path=script_relative_path)
vectice.create_run(job_name="Train model with lightgbm", job_type = JobType.TRAINING)
with vectice.start_run(inputs=[train_ds_version,test_ds_version, input_code]) as run:
lgbm = lgb.train(params,
train_data,
valid_sets=val_data,
valid_names=['train','valid'],
early_stopping_rounds=300)
# Predicting the output on the Test Dataset
ypred_lgbm = lgbm.predict(scaler_x_test)
ypred_lgbm
# the binary objective returns probabilities, so threshold at 0.5 to get class labels
y_pred_lgbm_class = [1 if p > 0.5 else 0 for p in ypred_lgbm]
accuracy_lgbm=accuracy_score(scaler_y_test,y_pred_lgbm_class)
print(accuracy_lgbm)
#Print Area Under Curve
plt.figure()
false_positive_rate, recall, thresholds = roc_curve(scaler_y_test, ypred_lgbm)
roc_auc = auc(false_positive_rate, recall)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(false_positive_rate, recall, 'b', label = 'AUC = %0.3f' %roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1], [0,1], 'r--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out (1-Specificity)')
plt.show()
print('AUC score:', roc_auc)
metrics =[("Accuracy score", accuracy_lgbm), ("AUC score", roc_auc)]
properties = [("n_estimators", str(n_estimators)), ("num_leaves", str(num_leaves)), ("max_depth", str(max_depth)),
("min_data_in_leaf", str(min_data_in_leaf)), ("learning_rate", str(learning_rate)), ("boosting", str(boosting)),
("objective", str(objective)), ("metric", str(metric)), ("n_jobs", str(n_jobs))]
model_version2 = vectice.create_model_version().with_parent_name("Classifier").with_algorithm("Light GBM").with_type(ModelType.CLASSIFICATION).with_properties(properties).with_metrics(metrics).with_user_version(get_random_string(12))
run.add_outputs([model_version2])
###Output
_____no_output_____
###Markdown
Santander customer satisfaction challenge Problem Customer satisfaction is a key measure of success for all businesses. Unhappy customers don't stay with the same provider and they rarely voice their dissatisfaction before leaving. In this context, Santander bank launched a challenge in Kaggle in order to build models that predict potential unhappy customers--- Objective The objective of this competition is to be able to identify unhappy customers early and anticipate their leaving which would allow the company to take proactive steps to improve a customer's happiness before it's too late. In this competition, you'll work with hundreds of anonymized features to predict if a customer is satisfied or dissatisfied with their banking experience. Data The data is an anonymized dataset containing a large number of numeric variables. The "TARGET" column is the variable to predict. It equals 1 for unsatisfied customers and 0 for satisfied customers. The task is to predict the probability that each customer in the test set is an unsatisfied customer.- train.csv: (371 columns): The training set including the target- test.csv: (370 columns): The test set without the target Install Vectice and GCS packages Vectice provides a generic metadata layer that is potentially suitable for most data science workflows. For this tutorial we will use the sickit-learn library for modeling and track experiments directly through our Python SDK to illustrate how to fine-tune exactly what you would like to track: metrics, etc. The same mechanisms would apply to R, Java or even more generic REST APIs to track metadata from any programming language and library. Here is a link to the Python SDK Documentation, it's not final nor complete and it is updated as we go along. [Python SDK Documentation](https://doc.vectice.com/)
###Code
!pip3 install -q fsspec
!pip3 install -q gcsfs
!pip3 install -q vectice[github]
!pip3 show vectice
###Output
_____no_output_____
###Markdown
Install the required packages **Especially if you're working locally and haven't already installed them**
###Code
!pip install -q numpy
!pip install -q pandas
!pip install -q matplotlib
!pip install -q seaborn
!pip install -q sklearn
!pip install -q lightgbm
!pip install -q imblearn
###Output
_____no_output_____
###Markdown
Import the required packages
###Code
import os
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from vectice.models import JobType
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score, auc
from sklearn.metrics import f1_score, confusion_matrix, precision_recall_curve, roc_curve
import lightgbm as lgb
from lightgbm import plot_importance
from imblearn.over_sampling import SMOTE
from collections import Counter
plt.style.use('seaborn')
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Retrieve the data from GCS If you're not using Google Colab, don't run the following cell; instead, put the JSON key file that was provided with your tutorial account in the same directory as this notebook
###Code
# Don't run this cell if you're not using Colab
# Load your json key file to access GCS. It can be found in the tutorial page
# The name should be something like readerKey.json
from google.colab import files
uploaded = files.upload()
# Once your file is loaded set the credentials for GCS and load the file
# in a pandas frame, double check the json file name you uploaded below.
### Complete with the name of the JSON key file to access GCS. It can be found in the tutorial page
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'readerKey.json'
## Get the dataset from GCS
train_df = pd.read_csv("gs://vectice-examples-samples/Customer_satisfaction_challenge/dataset.csv")
# Run head to make sure the data was loaded properly
print(train_df.head())
###Output
_____no_output_____
###Markdown
Data explorationData exploration enables us to take a first look at the data; it can enhance the overall understanding of the characteristics of the data domain and helps to detect correlations between the features, thereby allowing for the creation of more accurate models
###Code
print("Train Data Shape : ",train_df.shape)
train_df['TARGET'].value_counts()
train_df.info()
train_df.describe()
features = train_df.drop(['ID','TARGET'],axis=1)
###Output
_____no_output_____
###Markdown
Exploratory data analysis (EDA)* Target Percent* Check Multicollinearity* Check OutlierEDA is a technique that helps to explore and understand data sets by summarizing their main characteristics, often by plotting them visually. It includes histograms, box plots, scatter plots and many more. EDA is about gathering as many insights from the data as we can in order to understand it
###Code
pd.DataFrame(train_df['TARGET'].value_counts())
###Output
_____no_output_____
###Markdown
The training set is heavily imbalanced (73012 zeros vs 3008 ones), so some algorithms may learn mostly from the 0 class, which can bias our predictions. We address that by using oversampling, as illustrated by the toy sketch below
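A toy sketch of what oversampling does to class counts, using synthetic data from scikit-learn's make_classification rather than the Santander data (all values below are illustrative, not from this project):
###Code
## Hypothetical sketch: SMOTE on a small synthetic imbalanced dataset
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
X_toy, y_toy = make_classification(n_samples=200, n_features=5, n_informative=3,
n_redundant=0, weights=[0.95], random_state=0)
print("before:", sorted(Counter(y_toy).items())) # heavily imbalanced
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_toy, y_toy)
print("after: ", sorted(Counter(y_bal).items())) # classes balanced
###Output
_____no_output_____
###Markdown
Below we visualize the class balance of the actual training set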
###Code
f, ax = plt.subplots(1,2,figsize=(10,4))
train_df['TARGET'].value_counts().plot.pie(
explode=[0,0.1],autopct='%1.1f%%',ax=ax[0],shadow=True
)
sns.countplot('TARGET', data=train_df, ax=ax[1])
plt.show()
null_value = train_df.isnull().sum().sort_values(ascending=False)
null_percent = round(train_df.isnull().sum().sort_values(ascending=False)/len(train_df)*100,2)
pd.concat([null_value, null_percent], axis=1, keys=['Null values', 'Percent'])
###Output
_____no_output_____
###Markdown
There is no column with null values **Correlation**If features are highly correlated, we have a problem of multicollinearity. That means some features depend on other features, so we should reduce the dimensionality of our data (if A depends on B, we should either aggregate or combine the two features into one variable, or drop one of the variables that is too highly correlated with another); this can be addressed using Principal Component Analysis (PCA)
###Code
features[features.columns[:8]].corr()
sns.heatmap(features[features.columns[:8]].corr(),annot=True,cmap='YlGnBu')
fig=plt.gcf()
fig.set_size_inches(10,8)
plt.show()
###Output
_____no_output_____
###Markdown
=> We Can Check MulticollinearityMulticollinearity is a phenomenon in which one independent variable is highly correlated with one or more of the other independent variables
###Code
plt.figure(figsize=(16,6))
plt.title("Distribution of mean values per row in the train and test set")
sns.distplot(train_df[features.columns].mean(axis=1),color="black", kde=True,bins=120, label='train')
plt.legend()
plt.show()
plt.figure(figsize=(16,6))
plt.title("Distribution of std values per rows in the train and test set")
sns.distplot(train_df[features.columns].std(axis=1),color="blue",kde=True,bins=120, label='train')
plt.legend(); plt.show()
t0 = train_df[train_df['TARGET'] == 0]
t1 = train_df[train_df['TARGET'] == 1]
plt.figure(figsize=(16,6))
plt.title("Distribution of skew values per row in the train set")
sns.distplot(t0[features.columns].skew(axis=1),color="red", kde=True,bins=120, label='target = 0')
sns.distplot(t1[features.columns].skew(axis=1),color="blue", kde=True,bins=120, label='target = 1')
plt.legend(); plt.show()
###Output
_____no_output_____
###Markdown
=> We Can Check OutlierAn outlier is a value or point that differs substantially from the rest of the data
###Code
train_df.describe()
plt.boxplot(train_df['var3'])
plt.boxplot(train_df['var38'])
###Output
_____no_output_____
###Markdown
The training set:- Contains continuous and categorical data (we should treat categorical data carefully, because 10000 > 1 if we interpret such values as numeric rather than categorical, e.g. IDs)- Contains variables with zero variance or no predictive value- Contains fake values (-999999) that were introduced to replace missing data- Is heavily imbalanced PreprocessingData preprocessing refers to cleaning and organizing raw data to make it suitable for building and training machine learning models; it turns raw data into an informative, usable form. Our dataset is imbalanced. We will use oversampling to resolve this problem because otherwise some algorithms will learn more from the zeros than from the ones in our training dataset* Processing Outlier Values
###Code
train_df['var3'].replace(-999999,2,inplace=True)
train_df.describe()
###Output
_____no_output_____
###Markdown
Connect to your Vectice project Here we are going to need an API token and a project token. An API token is used to secure requests between your existing tools and Vectice. You can create and manage those in the API Tokens tab of your workspace; they impersonate you and your rights per workspace, so we strongly recommend that you avoid sharing them.A project token is used to target the project you're working on in the UI and can be found (after creating a project) in the Project settings page; anyone working on the project can see it and copy/paste it.
###Code
# In order to use Vectice SDK, let's set up the configurations first.
# The Vectice API key below can be generated from the UI.
# For better security, the settings can also be put into a dedicated file called `.vectice` or `.env`.
## Make sure that you're using the right endpoint
from vectice import Vectice
from vectice.entity.model import ModelType
## Here, we specify the Vectice API endpoint
os.environ['VECTICE_API_ENDPOINT']= "beta.vectice.com"
##Complete with your Vectice API token
os.environ['VECTICE_API_TOKEN'] = ""
## Complete with your Vectice project token
Project_Token = "PROJECT TOKEN"
## Create a Vetice instance to connect to your project using your project token
vectice = Vectice(project_token=Project_Token)
print(vectice)
###Output
_____no_output_____
###Markdown
Feature EngineeringIt's about creating new input features from your existing ones and can be seen as a process of addition that helps to improve the model's performance by :- Isolating and highlighting key information, which helps the algorithms "focus" on what’s important.- Letting you bring in your own domain expertise.- Most importantly, once you understand the "vocabulary" of feature engineering, you can bring in other people’s domain expertise!In this part we will:* Split the data into train / test sets* Standardize the training data with StandardScaler* Oversample the target classes with SMOTE
###Code
train_df.drop('ID',axis=1,inplace=True)
x = train_df.drop('TARGET',axis=1)
y = train_df['TARGET']
###Output
_____no_output_____
###Markdown
Resolving the problem of multicollinearity Here we are going to use the Pearson correlation method. It is the most common method for numerical variables; it assigns a value between −1 and 1, where 0 is no correlation, 1 is total positive correlation, and −1 is total negative correlation. This is interpreted as follows: a correlation value of 0.7 between two variables would indicate that a significant and positive relationship exists between the two. A positive correlation signifies that if variable A goes up, then B will also go up, whereas if the value of the correlation is negative, then if A increases, B decreases
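As a quick toy illustration (synthetic values, not taken from the dataset), the Pearson coefficient of a perfectly linear relationship is 1, and flipping the sign of one series gives −1:
###Code
## Toy example of Pearson correlation with numpy (values are made up for illustration)
a = np.array([1.0, 2.0, 3.0, 4.0])
b = 2 * a + 3 # perfectly linear in a -> correlation 1
c = -b # reversed direction -> correlation -1
print(np.corrcoef(a, b)[0, 1]) # 1.0
print(np.corrcoef(a, c)[0, 1]) # -1.0
###Output
_____no_output_____
###Markdown
The helper below flags, for a given threshold, the columns that are too highly correlated with an earlier column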
###Code
def correlation(dataset, threshold):
col_corr = set() # Set of all the names of correlated columns
corr_matrix = dataset.corr()
for i in range(len(corr_matrix.columns)):
for j in range(i):
if abs(corr_matrix.iloc[i, j]) > threshold: # we are interested in absolute coeff value
colname = corr_matrix.columns[i] # getting the name of column
col_corr.add(colname)
return col_corr
###Output
_____no_output_____
###Markdown
We consider a threshold of 0.9 to avoid high correlation
###Code
corr_features = correlation(x, 0.9)
len(set(corr_features))
x = x.drop(corr_features,axis=1)
###Output
_____no_output_____
###Markdown
Standardize dataStandardScaler is an important technique that is mainly performed as a preprocessing step before many machine learning models, in order to standardize the range of the input features. It rescales the distribution of values so that the mean of the observed values is 0 and the standard deviation is 1
###Code
scaler = StandardScaler().fit(x)
x_scaler = scaler.transform(x)
x_scaler_df = pd.DataFrame(x_scaler, columns=x.columns)
###Output
_____no_output_____
###Markdown
**Principal component analysis (PCA)** It's a statistical technique used for dimensionality reduction with minimal information loss. Using PCA can help identify correlations between data points. PCA creates a projection of the data that minimizes residual variance in the least squares sense and maximizes the variance of the projection coordinates
###Code
pca = PCA(n_components=0.95)
x_scaler_pca = pca.fit_transform(x_scaler)
x_scaler_pca_df = pd.DataFrame(x_scaler_pca)
x_scaler_pca_df.head()
pca.explained_variance_ratio_
plt.scatter(x_scaler_pca_df.loc[:, 0], x_scaler_pca_df.loc[:, 1], c=y, cmap="copper_r")
plt.axis('off')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
=> We can't use PCA since we can't reduce the dimensionality (the variance is spread across multiple variables and we didn't find a small number of variables that represent a considerable part of the variance) Split the data and use oversampling ML algorithms can perform poorly when dealing with datasets in which one or more classes represent only a small proportion of the overall dataset compared with a dominant class. This problem can be solved by balancing the number of examples between the different classes. Here we suggest SMOTE (Synthetic Minority Over-sampling Technique), which creates synthetic data points mid-way between two near neighbours in any particular class Create a dataset containing your data to use it as input for your splitting-data job. That can be done through the UI by going to your project, clicking on datasets and then clicking on add (you have to add a connection to be able to create a dataset; the service account (readerKey.json) used to create the connection must have write rights to do that).We can also create a dataset from the SDK:We can either create a dataset using the connection name, or the connection id. For both methods, we should specify the connection id/name, the dataset name, the list of files and the list of folders that our dataset will contain. To create a dataset containing just folders (no files) we should put None in the files list argument when we call one of the two functions. For example:- **vectice.create_dataset_with_connection_id(Connection_id, "dataset_versioning", files)** and **vectice.create_dataset_with_connection_name("Connection_name", "dataset_versioning", files)** create a dataset named dataset_versioning using the connection whose id/name is Connection_id/Connection_name and add the list of files files to the dataset.- **vectice.create_dataset_with_connection_id(Connection_id, "dataset_versioning", files, folders)** and **vectice.create_dataset_with_connection_name("Connection_name", "dataset_versioning", files, folders)** create a dataset named dataset_versioning using the connection whose id/name is Connection_id/Connection_name and add the list of files files and the list of folders folders to the dataset.- **vectice.create_dataset_with_connection_id(Connection_id, "dataset_versioning", None, folders)** and **vectice.create_dataset_with_connection_name("Connection_name", "dataset_versioning", None, folders)** create a dataset named dataset_versioning using the connection whose id/name is Connection_id/Connection_name and add the list of folders folders to the dataset. We can check if the datasets are already created in our workspace by calling **vectice.list_datasets()**, which lists all the datasets existing in the project
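A minimal sketch of SDK-based dataset creation, assuming a connection named "my_gcs_connection" already exists in the workspace; the connection name and file path below are illustrative placeholders, not values from this project:
###Code
## Hypothetical sketch: register a dataset through an existing connection.
## "my_gcs_connection" and the file path are placeholders; replace them with your own values.
files = ["Customer_satisfaction_challenge/training_data.csv"]
created_dataset = vectice.create_dataset_with_connection_name("my_gcs_connection", "dataset_versioning", files)
###Output
_____no_output_____
###Markdown
Listing the project's datasets lets us verify what is already registered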
###Code
vectice.list_datasets().list
###Output
_____no_output_____
###Markdown
Create a dataset version based on the created/existing dataset that contains your data
###Code
# Use auto-versioning here
input_ds_version = vectice.create_dataset_version().with_parent_name("Your dataset's name")
## Here, we create a code version in order to use it as input for our runs to show the location of the source code
## This notebook is in a Github repository, so we're going to use vectice.create_code_version_with_github_uri to create the code version
uri = "https://github.com/vectice/vectice-examples"
script_relative_path="Notebooks/Vanilla/Customer_satisfaction_example/customer_satisfaction_challenge.ipynb"
input_code = Vectice.create_code_version_with_github_uri(uri=uri, script_relative_path=script_relative_path)
###Output
_____no_output_____
###Markdown
The following code splits the dataset into train and test sets and uses the SMOTE method for oversampling in order to balance our dataset. Here, we declare train_set and test_set as dataset versions (after creating the datasets in the UI or from the SDK) in order to be able to use them as inputs for the different models
###Code
# Start a Vectice run. The job type is PREPARATION in this case.
vectice.create_run("jobSplitData_Customer_Satisfaction", JobType.PREPARATION).with_properties([("A run property", "A value"), ("A second run property", "A value")])
with vectice.start_run(inputs=[input_ds_version,input_code]) as run:
#with vectice.start_run(inputs=[input_ds_version]) as run:
#Split data
scaler_x_train, scaler_x_test, scaler_y_train, scaler_y_test = train_test_split(x_scaler, y, test_size=0.3)
#Use SMOTE to oversample the dataset
x_over, y_over = SMOTE().fit_resample(scaler_x_train,scaler_y_train)
print(sorted(Counter(y_over).items()))
# We commented out the code to persist the training and testing sets in GCS,
# because we already generated it for you, but feel free to uncomment it and execute it.
# The key (service account (readerKey.json)) existing in the tutorial page may not have write permissions to GCS.
# Let us know if you want to be able to write files as well and we can issue you a different key.
## Get training and testing data in dataframes in order to upload them to GCS
#train_set = pd.DataFrame(x_over, columns=x.columns).join(pd.DataFrame(y_over, columns=["TARGET"]))
#test_set = pd.DataFrame(scaler_x_test, columns=x.columns).join(pd.DataFrame(scaler_y_test, columns=["TARGET"]))
#train_set.to_csv (r'gs://vectice-examples-samples/Customer_satisfaction_challenge/training_data.csv', index = False, header = True)
#test_set.to_csv (r'gs://vectice-examples-samples/Customer_satisfaction_challenge/testing_data.csv', index = False, header = True)
# Don't forget to create the datasets from the SDK or from the UI before creating train_ds_version and test_ds_version
train_ds_version = vectice.create_dataset_version().with_parent_name("train dataset name")
test_ds_version = vectice.create_dataset_version().with_parent_name("test dataset name")
run.add_outputs([train_ds_version,test_ds_version])
###Output
_____no_output_____
###Markdown
Our data now contains the same number of zeros and ones Modeling* LogisticRegression* LightGBM Classification Here we create a function that calculates and shows the confusion matrix and the accuracy, precision, recall, f1_score and roc_auc metrics.- Confusion matrix: Confusion matrices represent counts of predicted and actual values. They show the rates of TP, FP, FN and TN- Accuracy: The model’s capability to correctly predict both the positives and negatives out of all the predictions. Accuracy_score = (TP + TN)/ (TP + FN + TN + FP)- Precision: The model's capability to correctly predict the positives out of all the positive predictions it made. Precision Score = TP / (FP + TP)- Recall: The model’s capability to correctly predict the positives out of actual positives. This is unlike precision, which measures how many of the model's positive predictions are actually positive. Recall is a useful measure of prediction success when the classes are very imbalanced. Recall Score = TP / (FN + TP)- F1 score: The model score as a function of precision and recall. This is a useful measure in scenarios where optimizing only precision or only recall would hurt overall performance. F1_Score = 2 * Precision Score * Recall Score / (Precision Score + Recall Score)- roc_auc_score : The ROC curve is a graph that shows the performance of a classification model at all possible thresholds (a threshold is a particular value beyond which you say a point belongs to a particular class). AUC measures how well a model is able to distinguish between classes. The curve is plotted between two parameters : * TRUE POSITIVE RATE * FALSE POSITIVE RATE- True Positive (TP): The count of correct positive predictions out of actual positive cases. Ex : Predict a sick person as sick- False Positive (FP): The count of incorrect positive predictions, i.e. negatives falsely predicted as positives. Ex : Predict a well person as sick- True Negative (TN): The count of correct negative predictions out of actual negative cases. Ex : Predict a well person as well- False Negative (FN): The count of incorrect negative predictions, i.e. positives falsely predicted as negatives. Ex : Predict a sick person as well
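As a worked example, the confusion matrix from one example run of the logistic regression later in this tutorial (TN=15443, FP=6460, FN=235, TP=668) yields the metrics below when computed by hand:
###Code
## Worked example: metrics computed by hand from one run's confusion matrix
## (TN, FP, FN, TP taken from a logistic regression run shown later in this tutorial)
TN, FP, FN, TP = 15443, 6460, 235, 668
accuracy = (TP + TN) / (TP + TN + FP + FN) # ~0.7064
precision = TP / (TP + FP) # ~0.0937
recall = TP / (TP + FN) # ~0.7398
f1 = 2 * precision * recall / (precision + recall) # ~0.1664
print(accuracy, precision, recall, f1)
###Output
_____no_output_____
###Markdown
The helper below computes the same metrics with scikit-learn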
###Code
def get_clf_eval(y_test, pred = None, pred_proba = None):
confusion = confusion_matrix(y_test, pred)
accuracy = accuracy_score(y_test, pred)
precision = precision_score(y_test, pred)
recall = recall_score(y_test, pred)
f1 = f1_score(y_test, pred)
roc_auc = roc_auc_score(y_test, pred_proba)
print('confusion')
print(confusion)
print('Accuracy : {}'.format(np.around(accuracy,4)))
print('Precision: {}'.format(np.around(precision,4)))
print('Recall : {}'.format(np.around(recall,4)))
print('F1 : {}'.format(np.around(f1,4)))
print('ROC_AUC : {}'.format(np.around(roc_auc,4)))
return confusion, accuracy, precision, recall, f1, roc_auc
###Output
_____no_output_____
###Markdown
We can get the list of the models existing in our project by calling **vectice.list_models()**
###Code
vectice.list_models().list
###Output
_____no_output_____
###Markdown
* **LogisticRegression**[Logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) is used to calculate the probability of a binary event occurring. For example, predicting if a credit card transaction is fraudulent or not fraudulent or predicting if an incoming email is spam or not spam
###Code
## Logistic Regression
vectice.create_run(job_name = "Train model with Logistic regression", job_type = JobType.TRAINING).with_properties([("A run property", "A value"), ("A second run property", "A value")])
with vectice.start_run(inputs=[train_ds_version,test_ds_version, input_code]) as run:
lg_reg = LogisticRegression()
lg_reg.fit(x_over, y_over)
pred = lg_reg.predict(scaler_x_test)
pred_proba = lg_reg.predict_proba(scaler_x_test)[:,1]
confusion, accuracy, precision, recall, f1, roc_auc = get_clf_eval(scaler_y_test, pred=pred, pred_proba=pred_proba)
metrics = [('Accuracy score', accuracy), ("Precision",precision), ("Recall", recall), ('f1 score', f1), ('AUC score', roc_auc)]
## Here we use with_generated_version() to create a new model version. We can also use with_user_version() to specify a user version
model_version1 = vectice.create_model_version().with_parent_name("Customer_Satisfaction_Classifier").with_algorithm("Logistic Regression").with_type(ModelType.CLASSIFICATION).with_metrics(metrics).with_generated_version()
run.add_outputs([model_version1])
###Output
_____no_output_____
###Markdown
* **LightGBM Classifier**[LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html) is a gradient boosting classifier in machine learning that uses tree-based learning algorithms. It can handle large amounts of data with low memory usage, supports parallel and GPU learning, and offers good accuracy with fast, efficient training
###Code
scaler_x_test, scaler_x_val, scaler_y_test, scaler_y_val = train_test_split(scaler_x_test, scaler_y_test, test_size=0.5)
##Setting up the model's parameters
## Feel free to play with the parameters
train_data = lgb.Dataset(x_over, label=y_over)
val_data = lgb.Dataset(scaler_x_val, label=scaler_y_val)
n_estimators = 5000
num_leaves = 20
max_depth = -1
min_data_in_leaf = 80
learning_rate = 0.001
boosting = 'gbdt'
objective = 'binary'
metric = 'auc'
n_jobs = -1
params = {
'n_estimators': n_estimators,
'num_leaves': num_leaves,
'max_depth': max_depth,
'min_data_in_leaf': min_data_in_leaf,
'learning_rate': learning_rate,
'boosting': boosting,
'objective': objective,
'metric': metric,
'n_jobs': n_jobs
}
## LightGBM Classifier
vectice.create_run(job_name="Train model with lightgbm", job_type = JobType.TRAINING).with_properties([("A run property", "A value"), ("A second run property", "A value")])
with vectice.start_run(inputs=[train_ds_version,test_ds_version, input_code]) as run:
lgbm = lgb.train(params,
train_data,
valid_sets=val_data,
valid_names=['train','valid'],
early_stopping_rounds=300)
# Predicting the output on the Test Dataset
ypred_lgbm = lgbm.predict(scaler_x_test)
ypred_lgbm
# For a binary objective, lgbm.predict returns P(class=1), so threshold at 0.5 instead of argmax
y_pred_lgbm_class = (ypred_lgbm > 0.5).astype(int)
accuracy_lgbm=accuracy_score(scaler_y_test,y_pred_lgbm_class)
print(accuracy_lgbm)
#Print Area Under Curve
plt.figure()
false_positive_rate, recall, thresholds = roc_curve(scaler_y_test, ypred_lgbm)
roc_auc = auc(false_positive_rate, recall)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(false_positive_rate, recall, 'b', label = 'AUC = %0.3f' %roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1], [0,1], 'r--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out (1-Specificity)')
plt.savefig("ROC_curve.png")
plt.show()
print('AUC score:', roc_auc)
metrics =[("Accuracy score", accuracy_lgbm), ("AUC score", roc_auc)]
properties = [("n_estimators", str(n_estimators)), ("num_leaves", str(num_leaves)), ("max_depth", str(max_depth)),
("min_data_in_leaf", str(min_data_in_leaf)), ("learning_rate", str(learning_rate)), ("boosting", str(boosting)),
("objective", str(objective)), ("metric", str(metric)), ("n_jobs", str(n_jobs))]
model_version2 = vectice.create_model_version().with_parent_name("Customer_Satisfaction_Classifier").with_algorithm("Light GBM").with_type(ModelType.CLASSIFICATION).with_properties(properties).with_metrics(metrics).with_generated_version().with_attachments(["ROC_curve.png"])
run.add_outputs([model_version2])
###Output
_____no_output_____
###Markdown
Santander customer satisfaction challenge Problem Customer satisfaction is a key measure of success for all businesses. Unhappy customers don't stay with the same provider, and they rarely voice their dissatisfaction before leaving. In this context, Santander bank launched a challenge on Kaggle in order to build models that predict potentially unhappy customers--- Objective The objective of this competition is to identify unhappy customers early and anticipate their leaving, which would allow the company to take proactive steps to improve a customer's happiness before it's too late. In this competition, you'll work with hundreds of anonymized features to predict whether a customer is satisfied or dissatisfied with their banking experience. Data The data is an anonymized dataset containing a large number of numeric variables. The "TARGET" column is the variable to predict. It equals 1 for unsatisfied customers and 0 for satisfied customers. The task is to predict the probability that each customer in the test set is an unsatisfied customer.- train.csv: (371 columns): The training set including the target- test.csv: (370 columns): The test set without the target Install Vectice and GCS packages Vectice provides a generic metadata layer that is potentially suitable for most data science workflows. For this tutorial we will use the scikit-learn library for modeling and track experiments directly through our Python SDK to illustrate how to fine-tune exactly what you would like to track: metrics, etc. The same mechanisms would apply to R, Java or even more generic REST APIs to track metadata from any programming language and library. Here is a link to the Python SDK Documentation; it is neither final nor complete and is updated as we go along. [Python SDK Documentation](https://storage.googleapis.com/sdk-documentation/index.html) This is a copy
###Code
print('Hello world! From Azure juplab')
print('modif from Azure terminal')
!pip3 install -q fsspec
!pip3 install -q gcsfs
!pip3 install -q vectice
!pip3 show vectice
###Output
Name: vectice
Version: 0.0.6
Summary: Vectice Python library
Home-page: https://github.com/vectice/vectice-python
Author: Vectice Inc
Author-email: [email protected]
License: Apache License 2.0
Location: /opt/conda/lib/python3.7/site-packages
Requires: GitPython, python-dotenv, PyGithub, requests
Required-by:
###Markdown
Install the required packages **Especially if you're working locally and haven't already installed them**
###Code
!pip install -q numpy
!pip install -q pandas
!pip install -q matplotlib
!pip install -q seaborn
!pip install -q sklearn
!pip install -q lightgbm
!pip install -q imblearn
###Output
_____no_output_____
###Markdown
Import the required packages
###Code
import os
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from vectice.models import JobType
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score, auc
from sklearn.metrics import f1_score, confusion_matrix, precision_recall_curve, roc_curve
import lightgbm as lgb
from lightgbm import plot_importance
from imblearn.over_sampling import SMOTE
plt.style.use('seaborn')
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Retrieve the data from GCS
###Code
# Load your json key file to access GCS that was provided with your tutorial account
# The name should be something like test.json
from google.colab import files
uploaded = files.upload()
# Once your file is loaded set the credentials for GCS and load the file
# in a pandas frame, double check the json file name you uploaded below.
## Complete with the name of your JSON key file for GCS access that was provided with your tutorial account
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'sandbox-storage-read-only.json'
# The original source dataset is already declared in the Vectice UI as "customer_satisfaction_train"
# and its connection to "gs://vectice-examples-samples/Customer_satisfaction_challenge/" has been established
train_df = pd.read_csv("gs://vectice-examples-samples/Customer_satisfaction_challenge/dataset.csv")
# Run head to make sure the data was loaded properly
print(train_df.head())
###Output
ID var3 var15 imp_ent_var16_ult1 imp_op_var39_comer_ult1 \
0 1 2 23 0.0 0.0
1 3 2 34 0.0 0.0
2 4 2 23 0.0 0.0
3 8 2 37 0.0 195.0
4 10 2 39 0.0 0.0
imp_op_var39_comer_ult3 imp_op_var40_comer_ult1 imp_op_var40_comer_ult3 \
0 0.0 0.0 0.0
1 0.0 0.0 0.0
2 0.0 0.0 0.0
3 195.0 0.0 0.0
4 0.0 0.0 0.0
imp_op_var40_efect_ult1 imp_op_var40_efect_ult3 ... \
0 0.0 0.0 ...
1 0.0 0.0 ...
2 0.0 0.0 ...
3 0.0 0.0 ...
4 0.0 0.0 ...
saldo_medio_var33_hace2 saldo_medio_var33_hace3 saldo_medio_var33_ult1 \
0 0.0 0.0 0.0
1 0.0 0.0 0.0
2 0.0 0.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
saldo_medio_var33_ult3 saldo_medio_var44_hace2 saldo_medio_var44_hace3 \
0 0.0 0.0 0.0
1 0.0 0.0 0.0
2 0.0 0.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
saldo_medio_var44_ult1 saldo_medio_var44_ult3 var38 TARGET
0 0.0 0.0 39205.170000 0
1 0.0 0.0 49278.030000 0
2 0.0 0.0 67333.770000 0
3 0.0 0.0 64007.970000 0
4 0.0 0.0 117310.979016 0
[5 rows x 371 columns]
###Markdown
Data explorationData exploration enables us to take a first look at the data; it can enhance the overall understanding of the characteristics of the data domain and helps to detect correlations between the features, thereby allowing for the creation of more accurate models
###Code
print("Train Data Shape : ",train_df.shape)
train_df['TARGET'].value_counts()
train_df.info()
train_df.describe()
features = train_df.drop(['ID','TARGET'],axis=1)
###Output
_____no_output_____
###Markdown
Exploratory data analysis (EDA)* Target Percent* Check Multicollinearity* Check OutlierEDA is a technique that helps to explore and understand data sets by summarizing their main characteristics, often by plotting them visually. It includes histograms, box plots, scatter plots and many more. EDA is about gathering as many insights from the data as we can in order to understand it
###Code
pd.DataFrame(train_df['TARGET'].value_counts())
###Output
_____no_output_____
###Markdown
The training set is heavily imbalanced (73012 zeros vs 3008 ones), so some algorithms may learn mostly from the 0 class, which can bias our predictions. We address that by using oversampling
###Code
f, ax = plt.subplots(1,2,figsize=(10,4))
train_df['TARGET'].value_counts().plot.pie(
explode=[0,0.1],autopct='%1.1f%%',ax=ax[0],shadow=True
)
sns.countplot('TARGET', data=train_df, ax=ax[1])
plt.show()
null_value = train_df.isnull().sum().sort_values(ascending=False)
null_percent = round(train_df.isnull().sum().sort_values(ascending=False)/len(train_df)*100,2)
pd.concat([null_value, null_percent], axis=1, keys=['Null values', 'Percent'])
###Output
_____no_output_____
###Markdown
There is no column with null values **Correlation**If features are highly correlated, we have a problem of multicollinearity. That means some features depend on other features, so we should reduce the dimensionality of our data (if A depends on B, we should either aggregate or combine the two features into one variable, or drop one of the variables that is too highly correlated with another); this can be addressed using Principal Component Analysis (PCA)
###Code
features[features.columns[:8]].corr()
sns.heatmap(features[features.columns[:8]].corr(),annot=True,cmap='YlGnBu')
fig=plt.gcf()
fig.set_size_inches(10,8)
plt.show()
###Output
_____no_output_____
###Markdown
=> We Can Check MulticollinearityMulticollinearity is a phenomenon in which one independent variable is highly correlated with one or more of the other independent variables
###Code
plt.figure(figsize=(16,6))
plt.title("Distribution of mean values per row in the train and test set")
sns.distplot(train_df[features.columns].mean(axis=1),color="black", kde=True,bins=120, label='train')
plt.legend()
plt.show()
plt.figure(figsize=(16,6))
plt.title("Distribution of std values per rows in the train and test set")
sns.distplot(train_df[features.columns].std(axis=1),color="blue",kde=True,bins=120, label='train')
plt.legend(); plt.show()
t0 = train_df[train_df['TARGET'] == 0]
t1 = train_df[train_df['TARGET'] == 1]
plt.figure(figsize=(16,6))
plt.title("Distribution of skew values per row in the train set")
sns.distplot(t0[features.columns].skew(axis=1),color="red", kde=True,bins=120, label='target = 0')
sns.distplot(t1[features.columns].skew(axis=1),color="blue", kde=True,bins=120, label='target = 1')
plt.legend(); plt.show()
###Output
_____no_output_____
###Markdown
=> We Can Check OutlierAn outlier is a value or point that differs substantially from the rest of the data
###Code
train_df.describe()
plt.boxplot(train_df['var3'])
plt.boxplot(train_df['var38'])
###Output
_____no_output_____
###Markdown
The training set:- Contains continuous and categorical data (we should treat categorical data carefully, because 10000 > 1 if we interpret such values as numeric rather than categorical, e.g. IDs)- Contains variables with zero variance or no predictive value- Contains fake values (-999999) that were introduced to replace missing data- Is heavily imbalanced PreprocessingData preprocessing refers to cleaning and organizing raw data to make it suitable for building and training machine learning models; it turns raw data into an informative, usable form. Our dataset is imbalanced. We will use oversampling to resolve this problem because otherwise some algorithms will learn more from the zeros than from the ones in our training dataset* Processing Outlier Values
###Code
train_df['var3'].replace(-999999,2,inplace=True)
train_df.describe()
###Output
_____no_output_____
###Markdown
Connect to your Vectice project Here we are going to need an API token and a project token. An API token is used to secure requests between your existing tools and Vectice. You can create and manage those in the API Tokens tab of your workspace; they impersonate you and your rights per workspace, so we strongly recommend that you avoid sharing them.A project token is used to target the project you're working on in the UI and can be found (after creating a project) in the Project settings page; anyone working on the project can see it and copy/paste it.
###Code
# In order to use Vectice SDK, let's set up the configurations first.
# The Vectice API key below can be generated from the UI.
# For better security, the settings can also be put into a dedicated file called `.vectice` or `.env`.
## Make sure that you're using the right endpoint (hint: be-beta.vectice.com)
os.environ['VECTICE_API_ENDPOINT']= ""
os.environ['VECTICE_API_TOKEN'] = ""
## Create a Vetice instance to connect to your project using your project token
## Hint: Do not forget to import vectice (from vectice import Vectice)
vectice = Vectice(project_token="")
print(vectice)
#@title Double click to show the syntax
os.environ['VECTICE_API_ENDPOINT']= "be-test.vectice.com"
##Complete with your Vectice API token
os.environ['VECTICE_API_TOKEN'] = "NKBkPJ25j.m6QdAkPDrWYwRZxXEOJLeNKBkPJ25j3VgzlypaGnM41bq89omv"
from vectice import Vectice
## Complete with your project token
vectice = Vectice(project_token="Kvbr2g6EIk6rkpWBwdV5")
print(vectice)
###Output
<vectice.vectice.Vectice object at 0x7f16d62e0bd0>
###Markdown
Feature EngineeringIt's about creating new input features from your existing ones and can be seen as a process of addition that helps to improve the model's performance by :- Isolating and highlighting key information, which helps the algorithms "focus" on what’s important.- Letting you bring in your own domain expertise.- Most importantly, once you understand the "vocabulary" of feature engineering, you can bring in other people’s domain expertise!In this part we will:* Split the data into train / test sets* Standardize the training data with StandardScaler* Oversample the target classes with SMOTE
###Code
train_df.drop('ID',axis=1,inplace=True)
x = train_df.drop('TARGET',axis=1)
y = train_df['TARGET']
###Output
_____no_output_____
###Markdown
Resolving the problem of multicollinearity Here we are going to use the Pearson correlation method. It is the most common method for numerical variables; it assigns a value between −1 and 1, where 0 is no correlation, 1 is total positive correlation, and −1 is total negative correlation. This is interpreted as follows: a correlation value of 0.7 between two variables would indicate that a significant and positive relationship exists between the two. A positive correlation signifies that if variable A goes up, then B will also go up, whereas if the value of the correlation is negative, then if A increases, B decreases
###Code
def correlation(dataset, threshold):
col_corr = set() # Set of all the names of correlated columns
corr_matrix = dataset.corr()
for i in range(len(corr_matrix.columns)):
for j in range(i):
if abs(corr_matrix.iloc[i, j]) > threshold: # we are interested in absolute coeff value
colname = corr_matrix.columns[i] # getting the name of column
col_corr.add(colname)
return col_corr
###Output
_____no_output_____
###Markdown
We consider a threshold of 0.9 to avoid high correlation
###Code
corr_features = correlation(x, 0.9)
len(set(corr_features))
x = x.drop(corr_features,axis=1)
###Output
_____no_output_____
###Markdown
Standardize dataStandardScaler is an important technique that is mainly performed as a preprocessing step before many machine learning models, in order to standardize the range of the input features. It rescales the distribution of values so that the mean of the observed values is 0 and the standard deviation is 1
###Code
scaler = StandardScaler().fit(x)
x_scaler = scaler.transform(x)
x_scaler_df = pd.DataFrame(x_scaler, columns=x.columns)
###Output
_____no_output_____
###Markdown
**Principal component analysis (PCA)** It's a statistical technique used for dimensionality reduction with minimal information loss. Using PCA can help identify correlations between data points. PCA creates a projection of the data that minimizes residual variance in the least squares sense and maximizes the variance of the projection coordinates
###Code
pca = PCA(n_components=0.95)
x_scaler_pca = pca.fit_transform(x_scaler)
x_scaler_pca_df = pd.DataFrame(x_scaler_pca)
x_scaler_pca_df.head()
pca.explained_variance_ratio_
plt.scatter(x_scaler_pca_df.loc[:, 0], x_scaler_pca_df.loc[:, 1], c=y, cmap="copper_r")
plt.axis('off')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
=> We can't use PCA since we can't reduce the dimensionality (the variance is spread across multiple variables and we didn't find a small number of variables that represent a considerable part of the variance) Split the data and use oversampling ML algorithms can perform poorly when dealing with datasets in which one or more classes represent only a small proportion of the overall dataset compared with a dominant class. This problem can be solved by balancing the number of examples between the different classes. Here we suggest SMOTE (Synthetic Minority Over-sampling Technique), which creates synthetic data points mid-way between two near neighbours in any particular class Create a dataset containing your data to use it as input for your splitting-data job. That can be done through the UI by going to your project, clicking on datasets and then clicking on add (you should add a connection to be able to create a dataset)Create a dataset version based on the dataset you created above
###Code
input_ds_version = ""
#@title Double click to show the syntax
# Use auto-versioning here
input_ds_version = vectice.create_dataset_version().with_parent_name("customer_satisfaction_jupyter")
###Output
_____no_output_____
###Markdown
The following code splits the dataset into train and test sets and uses the SMOTE method for oversampling in order to balance our dataset. Please complete it by creating a PREPARATION job run, starting it, and then declaring train_set and test_set as dataset versions (after creating the datasets in the UI) in order to be able to use them as inputs for the different models
###Code
from imblearn.over_sampling import SMOTE
from collections import Counter
#Split data
scaler_x_train, scaler_x_test, scaler_y_train, scaler_y_test = train_test_split(x_scaler, y, test_size=0.3)
#Use SMOTE to oversample the dataset
x_over, y_over = SMOTE().fit_resample(scaler_x_train,scaler_y_train)
print(sorted(Counter(y_over).items()))
#@title Double click to show the answer
uri = "https://github.com/Mesto00/Users-Notebooks"
script_relative_path="Notebooks/Vanilla/Customer_satisfaction_example/customer_satisfaction_challenge.ipynb"
input_code = Vectice.create_code_version_with_github_uri(uri=uri, script_relative_path=script_relative_path)
input_ds_version = input_ds_version
# Start a Vectice run. The job type should be PREPARATION in this case.
vectice.create_run("jobSplitData_Customer_Satisfaction", JobType.PREPARATION)
with vectice.start_run(inputs=[input_ds_version,input_code]) as run:
#with vectice.start_run(inputs=[input_ds_version]) as run:
#Split data
scaler_x_train, scaler_x_test, scaler_y_train, scaler_y_test = train_test_split(x_scaler, y, test_size=0.3)
#Use SMOTE to oversample the dataset
x_over, y_over = SMOTE().fit_resample(scaler_x_train,scaler_y_train)
##We commented out the code to persist the training and testing sets in GCS,
# because we already generated it for you, but feel free to uncomment it and execute it.
# The key you were provided for this tutorial may not have write permissions to GCS.
# Let us know if you want to be able to write files as well and we can issue you a different key.
## Get training and testing data in dataframes in order to upload them to GCS
#train_set = pd.DataFrame(x_over, columns=x.columns).join(pd.DataFrame(y_over, columns=["TARGET"]))
#test_set = pd.DataFrame(scaler_x_test, columns=x.columns).join(pd.DataFrame(scaler_y_test, columns=["TARGET"]))
#train_set.to_csv (r'gs://vectice-examples-samples/Customer_satisfaction_challenge/training_data.csv', index = False, header = True)
#test_set.to_csv (r'gs://vectice-examples-samples/Customer_satisfaction_challenge/testing_data.csv', index = False, header = True)
# Don't forget to create the datasets before creating train_ds_version and test_ds_version
train_ds_version = vectice.create_dataset_version().with_parent_name("train_customer_satisfaction_jupyter")
test_ds_version = vectice.create_dataset_version().with_parent_name("test_customer_satisfaction_jupyter")
run.add_outputs([train_ds_version,test_ds_version])
###Output
_____no_output_____
###Markdown
Our data now contains the same number of zeros and ones Get different user versions Generate a random user version by calling get_random_string
###Code
# Let's generate some unique names for our following modeling experiments
import random
import string
def get_random_string(length):
return "".join(random.choice(string.ascii_letters) for i in range(length))
###Output
_____no_output_____
###Markdown
Modeling* LogisticRegression* LightGBM Classification Here we create a function that calculates and shows the confusion matrix and the accuracy, precision, recall, f1_score and roc_auc metrics.- Confusion matrix: Confusion matrices represent counts of predicted and actual values. They show the rates of TP, FP, FN and TN- Accuracy: The model’s capability to correctly predict both the positives and negatives out of all the predictions. Accuracy_score = (TP + TN)/ (TP + FN + TN + FP)- Precision: The model's capability to correctly predict the positives out of all the positive predictions it made. Precision Score = TP / (FP + TP)- Recall: The model’s capability to correctly predict the positives out of actual positives. This is unlike precision, which measures how many of the model's positive predictions are actually positive. Recall is a useful measure of prediction success when the classes are very imbalanced. Recall Score = TP / (FN + TP)- F1 score: The model score as a function of precision and recall. This is a useful measure in scenarios where optimizing only precision or only recall would hurt overall performance. F1_Score = 2 * Precision Score * Recall Score / (Precision Score + Recall Score)- roc_auc_score : The ROC curve is a graph that shows the performance of a classification model at all possible thresholds (a threshold is a particular value beyond which you say a point belongs to a particular class). AUC measures how well a model is able to distinguish between classes. The curve is plotted between two parameters : * TRUE POSITIVE RATE * FALSE POSITIVE RATE- True Positive (TP): The count of correct positive predictions out of actual positive cases. Ex : Predict a sick person as sick- False Positive (FP): The count of incorrect positive predictions, i.e. negatives falsely predicted as positives. Ex : Predict a well person as sick- True Negative (TN): The count of correct negative predictions out of actual negative cases. Ex : Predict a well person as well- False Negative (FN): The count of incorrect negative predictions, i.e. positives falsely predicted as negatives. Ex : Predict a sick person as well
###Code
def get_clf_eval(y_test, pred = None, pred_proba = None):
confusion = confusion_matrix(y_test, pred)
accuracy = accuracy_score(y_test, pred)
precision = precision_score(y_test, pred)
recall = recall_score(y_test, pred)
f1 = f1_score(y_test, pred)
roc_auc = roc_auc_score(y_test, pred_proba)
print('confusion')
print(confusion)
print('Accuracy : {}'.format(np.around(accuracy,4)))
print('Precision: {}'.format(np.around(precision,4)))
print('Recall : {}'.format(np.around(recall,4)))
print('F1 : {}'.format(np.around(f1,4)))
print('ROC_AUC : {}'.format(np.around(roc_auc,4)))
return confusion, accuracy, precision, recall, f1, roc_auc
###Output
_____no_output_____
###Markdown
* **LogisticRegression**[Logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) is used to calculate the probability of a binary event occurring. For example, predicting if a credit card transaction is fraudulent or not fraudulent or predicting if an incoming email is spam or not spam The following code creates a Logistic regression model and calculates the metrics related to this model. Complete the code by adding a job run to create a model and send the metrics to Vectice (you can look at the examples in the documentation), and don't forget to use the names you generated for your experiments
###Code
#Logistic Regression
## Create a run
##Start the run
lg_reg = LogisticRegression()
lg_reg.fit(x_over, y_over)
pred = lg_reg.predict(scaler_x_test)
pred_proba = lg_reg.predict_proba(scaler_x_test)[:,1]
confusion, accuracy, precision, recall, f1, roc_auc = get_clf_eval(scaler_y_test, pred=pred, pred_proba=pred_proba)
## Create a model version and add metrics and properties to it (you can use the function get_random_string defined above in order to generate different user versions)
#@title Double click to show the answer
uri = "https://github.com/Mesto00/Users-Notebooks"
script_relative_path="Notebooks/Vanilla/Customer_satisfaction_example/customer_satisfaction_challenge.ipynb"
input_code = Vectice.create_code_version_with_github_uri(uri=uri, script_relative_path=script_relative_path)
vectice.create_run(job_name = "Train model with Logistic regression", job_type = JobType.TRAINING)
with vectice.start_run(inputs=[train_ds_version,test_ds_version, input_code]) as run:
lg_reg = LogisticRegression()
lg_reg.fit(x_over, y_over)
pred = lg_reg.predict(scaler_x_test)
pred_proba = lg_reg.predict_proba(scaler_x_test)[:,1]
confusion, accuracy, precision, recall, f1, roc_auc = get_clf_eval(scaler_y_test, pred=pred, pred_proba=pred_proba)
metrics = [('Accuracy score', accuracy), ("Precision", precision), ("Recall", recall), ('f1 score', f1), ('ROC_AUC', roc_auc)]
model_version1 = vectice.create_model_version().with_parent_name("logisticRegression").with_algorithm("Classification : Logistic Regression").with_metrics(metrics).with_user_version(get_random_string(12))
run.add_outputs([model_version1])
###Output
confusion
[[15443 6460]
[ 235 668]]
Accuacy : 0.7064
Precision: 0.0937
Recall : 0.7398
F1 : 0.1664
ROC_AUC : 0.8015
###Markdown
* **LightGBM Classifier**[LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html) is a gradient boosting classifier in machine learning that uses tree-based learning algorithms. It can handle a large amount of data, less memory usage, has parallel and GPU learning, good accuracy, faster training speed and efficiency
###Code
scaler_x_test, scaler_x_val, scaler_y_test, scaler_y_val = train_test_split(scaler_x_test, scaler_y_test, test_size=0.5)
##Setting up the model's parameters
## Feel free to play with the parameters
train_data = lgb.Dataset(x_over, label=y_over)
val_data = lgb.Dataset(scaler_x_val, label=scaler_y_val)
n_estimators = 5000
num_leaves = 20
max_depth = -1
min_data_in_leaf = 80
learning_rate = 0.001
boosting = 'gbdt'
objective = 'binary'
metric = 'auc'
n_jobs = -1
params = {
'n_estimators': n_estimators,
'num_leaves': num_leaves,
'max_depth': max_depth,
'min_data_in_leaf': min_data_in_leaf,
'learning_rate': learning_rate,
'boosting': boosting,
'objective': objective,
'metric': metric,
'n_jobs': n_jobs
}
###Output
_____no_output_____
###Markdown
The following code creates a LightGBM classifier model and calculates the metrics related to this model. Complete the code by adding a job run to create a model and send the metrics to Vectice, and don't forget to pass the dataset versions you created before as inputs to your model
###Code
## Create a run
##Start the run
lgbm = lgb.train(params,
train_data,
valid_sets=val_data,
valid_names=['train','valid'],
early_stopping_rounds=300)
# Predicting the output on the Test Dataset
ypred_lgbm = lgbm.predict(scaler_x_test)
ypred_lgbm
# For a binary objective, lgbm.predict returns P(class=1), so threshold at 0.5 instead of argmax
y_pred_lgbm_class = (ypred_lgbm > 0.5).astype(int)
accuracy_lgbm=accuracy_score(scaler_y_test,y_pred_lgbm_class)
print(accuracy_lgbm)
#Print Area Under Curve
plt.figure()
false_positive_rate, recall, thresholds = roc_curve(scaler_y_test, ypred_lgbm)
roc_auc = auc(false_positive_rate, recall)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(false_positive_rate, recall, 'b', label = 'AUC = %0.3f' %roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1], [0,1], 'r--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out (1-Specificity)')
plt.show()
print('AUC score:', roc_auc)
## Create a model version and add metrics and properties to it (you can use the function get_random_string defined above in order to generate different user versions)
#@title Double click to show the answer
uri = "https://github.com/Mesto00/Users-Notebooks"
script_relative_path="Notebooks/Vanilla/Customer_satisfaction_example/customer_satisfaction_challenge.ipynb"
input_code = Vectice.create_code_version_with_github_uri(uri=uri, script_relative_path=script_relative_path)
vectice.create_run(job_name="Train model with lightgbm", job_type = JobType.TRAINING)
with vectice.start_run(inputs=[train_ds_version,test_ds_version, input_code]) as run:
lgbm = lgb.train(params,
train_data,
valid_sets=val_data,
valid_names=['train','valid'],
early_stopping_rounds=300)
# Predicting the output on the Test Dataset
ypred_lgbm = lgbm.predict(scaler_x_test)
ypred_lgbm
# For a binary objective, lgbm.predict returns P(class=1), so threshold at 0.5 instead of argmax
y_pred_lgbm_class = (ypred_lgbm > 0.5).astype(int)
accuracy_lgbm=accuracy_score(scaler_y_test,y_pred_lgbm_class)
print(accuracy_lgbm)
#Print Area Under Curve
plt.figure()
false_positive_rate, recall, thresholds = roc_curve(scaler_y_test, ypred_lgbm)
roc_auc = auc(false_positive_rate, recall)
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(false_positive_rate, recall, 'b', label = 'AUC = %0.3f' %roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1], [0,1], 'r--')
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out (1-Specificity)')
plt.show()
print('AUC score:', roc_auc)
metrics =[("Accuracy", accuracy_lgbm), ("AUC score", roc_auc)]
properties = [("n_estimators", n_estimators), ("num_leaves", num_leaves), ("max_depth", max_depth),
("min_data_in_leaf", min_data_in_leaf), ("learning_rate", learning_rate), ("boosting", boosting),
("objective", objective), ("metric", metric), ("n_jobs", n_jobs)]
model_version2 = vectice.create_model_version().with_parent_name("LGBMclassification").with_algorithm("Classification : LGBM").with_properties(properties).with_metrics(metrics).with_user_version(get_random_string(12))
run.add_outputs([model_version2])
###Output
[LightGBM] [Info] Number of positive: 51109, number of negative: 51109
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.037769 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 20053
[LightGBM] [Info] Number of data points in the train set: 102218, number of used features: 118
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[1] train's auc: 0.798169
Training until validation scores don't improve for 300 rounds
...[iterations 2-1428 omitted; train's auc rises gradually from 0.798169, peaking at 0.828824 on iteration 1129]...
[1429] train's auc: 0.828241
Early stopping, best iteration is:
[1129] train's auc: 0.828824
0.962027536613172
|
scrape_data.ipynb | ###Markdown
Scraping the Data
###Code
import numpy as np
import pandas as pd
import urllib.request as urllib
from bs4 import BeautifulSoup, Comment
from selenium import webdriver
from datetime import datetime
import time
import random
import os
import string
###Output
_____no_output_____
###Markdown
To build my award models, the first step is to run this notebook, which scrapes all of the data we will use. We scrape basketball-reference.com using a gecko driver that automatically opens a Firefox window: the driver visits every player profile on the site to collect per-season player statistics, and every award voting page to collect the award data. In total, the script runs for a few hours before all of the data is scraped. First, set the path to your gecko driver. I put my geckodriver file in the same directory as this project.
###Code
PATH=os.path.abspath(os.getcwd()) + '/geckodriver'
###Output
_____no_output_____
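###Markdown
As a quick smoke test, the fetch-and-parse pattern used throughout this notebook looks like the following (a minimal sketch, assuming Selenium 3.x, where `executable_path` is still accepted):
###Code
# launch Firefox via geckodriver, load one page, and hand the HTML to BeautifulSoup
browser = webdriver.Firefox(executable_path = PATH)
browser.get("https://www.basketball-reference.com")
soup = BeautifulSoup(browser.page_source, 'html.parser')
print(soup.title.get_text())
browser.close()
###Output
_____no_output_____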
###Markdown
I will create two dataframes that will be used to store all of the data that I am scraping.
###Code
# get totals
player_seasons = pd.DataFrame(columns=['player', 'season', 'age', 'team', 'position', 'g', 'gs', 'mp', 'fg', 'fga', 'fg_pct',
'three_p', 'three_pa', 'three_pct', 'two_p', 'two_pa', 'two_pct', 'efg', 'ft',
'fta', 'ft_pct', 'orb', 'drb', 'trb', 'ast', 'stl', 'blk', 'tov','pf', 'pts', 'trp_dbl'])
player_seasons.set_index(['player', 'season'], inplace = True)
award_data = pd.DataFrame(columns=['player', 'season', 'award', 'first_place_votes', 'award_pts_won', 'award_pts_max'])
award_data.set_index(['player', 'season'], inplace = True)
###Output
_____no_output_____
###Markdown
Scraping Award Data
This is the code that scrapes the award voting data for each year. After filling in the award_data dataframe, we save this dataframe to a CSV.
###Code
# get url of award voting results for a given year
def get_award_url(year):
return f"https://www.basketball-reference.com/awards/awards_{year}.html"
# extract award data from beautiful soup object and add it to award_rows_list
def scrape_award_data(award_name, soup):
#get rows of award votes from table
awardTable = soup.find("table", {"id": award_name})
if awardTable is None:
awardTable = soup.find("table", {"id": f"nba_{award_name}"})
if awardTable is not None:
awardRows = awardTable.find("tbody").find_all("tr")
print(f"Got rows of {award_name} players from table, starting to iterate through rows")
#iterate through votes on page, filling data into award_data dataframe
for row in awardRows:
if row.get('class') == None:
player_name = row.find("td", {"data-stat":"player"}).find("a").get_text()
first_place_votes = row.find("td", {"data-stat":"votes_first"}).get_text()
award_pts_won = row.find("td", {"data-stat":"points_won"}).get_text()
award_pts_max = row.find("td", {"data-stat":"points_max"}).get_text()
award_rows_list.append({'player': player_name, 'season': year, 'award': award_name,
'first_place_votes': first_place_votes, 'award_pts_won': award_pts_won,
'award_pts_max': award_pts_max})
browser = webdriver.Firefox(executable_path = PATH)
award_rows_list = []
# award voting data is available on bbref from 1956
years = range(1956, datetime.now().year)
for year in years:
    browser.get(get_award_url(year))
time.sleep(3)
source = browser.page_source
soup = BeautifulSoup(source, 'html.parser')
print(f"Year: {year}")
scrape_award_data('mvp', soup)
scrape_award_data('roy', soup)
scrape_award_data('dpoy', soup)
scrape_award_data('smoy', soup)
scrape_award_data('mip', soup)
time.sleep(random.randint(0,1))
browser.close()
award_data = pd.DataFrame(award_rows_list, columns=['player', 'season', 'award', 'first_place_votes', 'award_pts_won', 'award_pts_max'])
award_data.set_index(['player', 'season'], inplace = True)
award_data.to_csv('data/award_data.csv')
###Output
_____no_output_____
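###Markdown
The saved file can be reloaded later with the same two-level index (a usage sketch, assuming the CSV written above):
###Code
award_data = pd.read_csv('data/award_data.csv', index_col=['player', 'season'])
###Output
_____no_output_____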
###Markdown
Scraping Player Data
This is the code that scrapes the statistics each player recorded in every season. It loops through the player directory on basketball-reference.com and scrapes data from each player's profile page. After filling in the player_seasons dataframe, we save this dataframe to a CSV. WARNING: this takes a few hours to run, as there are thousands of players to scrape and we wait 0-1 seconds before visiting each player's profile page (to avoid overwhelming basketball-reference.com). A small refactoring sketch of the repeated column-lookup pattern follows the cell below.
###Code
# scrape all of the player basic stats data for each season
# get url of all players whose last name starts with the given letter
def get_letter_url(letter):
return f"https://www.basketball-reference.com/players/{letter}/"
browser = webdriver.Firefox(executable_path = PATH)
player_season_list = []
for letter in string.ascii_lowercase[::-1]:
print(f"Letter: {letter}")
time.sleep(random.randint(0,1))
html = urllib.urlopen(get_letter_url(letter))
soup = BeautifulSoup(html.read())
html.close()
#get rows of players from table
playerTable = soup.find("table", {"id":"players"})
playerRows = playerTable.find("tbody").find_all("tr")
#iterate through players on page, filling data into players dataframe
for row in playerRows:
if row.get('class') == None:
            player_name = row.find("th", {"data-stat":"player"}).find("a").get_text()
player_link = row.find("th", {"data-stat":"player"}).find("a")['href']
full_player_link = f"https://www.basketball-reference.com{player_link}"
print(player_name)
time.sleep(random.randint(0,1))
            browser.get(full_player_link)
source = browser.page_source
player_soup = BeautifulSoup(source, 'html.parser')
totalsTable = player_soup.find("table", {"id":"totals"})
totalsRows = totalsTable.find("tbody").find_all("tr")
time.sleep(random.randint(0,1))
prev_yr = 0
for row_t in totalsRows:
league_soup = row_t.find("td", {"data-stat":"lg_id"})
if league_soup is not None and league_soup.find("a") is not None:
league = league_soup.find("a").get_text()
else:
league = 'N/A'
if league == "NBA":
season_str = row_t.find("th", {"data-stat":"season"}).find("a").get_text()[0:4]
year = int(season_str) + 1
if year == prev_yr:
team = row_t.find("td", {"data-stat":"team_id"}).find("a").get_text() + " "
update_team = player_season_list[-1]
update_team['team'] = update_team['team'] + team
player_season_list[-1] = update_team
else:
team_soup = row_t.find("td", {"data-stat":"team_id"})
if team_soup.find("a") is not None:
team = row_t.find("td", {"data-stat":"team_id"}).find("a").get_text() + " "
else:
team = ""
age = row_t.find("td", {"data-stat":"age"}).get_text()
position = row_t.find("td", {"data-stat":"pos"}).get_text()
g = row_t.find("td", {"data-stat":"g"}).get_text()
gs = row_t.find("td", {"data-stat":"gs"}).get_text()
mp = row_t.find("td", {"data-stat":"mp"}).get_text()
fg = row_t.find("td", {"data-stat":"fg"}).get_text()
fga = row_t.find("td", {"data-stat":"fga"}).get_text()
fg_pct = row_t.find("td", {"data-stat":"fg_pct"}).get_text()
if row_t.find("td", {"data-stat":"fg3"}) is not None:
three_p = row_t.find("td", {"data-stat":"fg3"}).get_text()
else:
three_p = 0
if row_t.find("td", {"data-stat":"fg3a"}) is not None:
three_pa = row_t.find("td", {"data-stat":"fg3a"}).get_text()
else:
three_pa = 0
if row_t.find("td", {"data-stat":"fg3_pct"}) is not None:
three_pct = row_t.find("td", {"data-stat":"fg3_pct"}).get_text()
else:
three_pct = 0
if row_t.find("td", {"data-stat":"fg2"}) is not None:
two_p = row_t.find("td", {"data-stat":"fg2"}).get_text()
else:
two_p = fg
if row_t.find("td", {"data-stat":"fg2a"}) is not None:
two_pa = row_t.find("td", {"data-stat":"fg2a"}).get_text()
else:
two_pa = fga
if row_t.find("td", {"data-stat":"fg2_pct"}) is not None:
two_pct = row_t.find("td", {"data-stat":"fg2_pct"}).get_text()
else:
two_pct = fg_pct
if row_t.find("td", {"data-stat":"efg_pct"}) is not None:
efg = row_t.find("td", {"data-stat":"efg_pct"}).get_text()
else:
efg = fg_pct
ft = row_t.find("td", {"data-stat":"ft"}).get_text()
fta = row_t.find("td", {"data-stat":"fta"}).get_text()
ft_pct = row_t.find("td", {"data-stat":"ft_pct"}).get_text()
if row_t.find("td", {"data-stat":"orb"}) is not None:
orb = row_t.find("td", {"data-stat":"orb"}).get_text()
else:
orb = ''
if row_t.find("td", {"data-stat":"drb"}) is not None:
drb = row_t.find("td", {"data-stat":"drb"}).get_text()
else:
drb = ''
trb = row_t.find("td", {"data-stat":"trb"}).get_text()
ast = row_t.find("td", {"data-stat":"ast"}).get_text()
if row_t.find("td", {"data-stat":"stl"}) is not None:
stl = row_t.find("td", {"data-stat":"stl"}).get_text()
else:
stl = ''
if row_t.find("td", {"data-stat":"blk"}) is not None:
blk = row_t.find("td", {"data-stat":"blk"}).get_text()
else:
blk = ''
if row_t.find("td", {"data-stat":"tov"}) is not None:
tov = row_t.find("td", {"data-stat":"tov"}).get_text()
else:
tov = ''
pf = row_t.find("td", {"data-stat":"pf"}).get_text()
pts = row_t.find("td", {"data-stat":"pts"}).get_text()
trp_dbl_soup = row_t.find("td", {"data-stat":"trp_dbl"})
if trp_dbl_soup is None:
trp_dbl = ''
else:
trp_dbl = trp_dbl_soup.get_text()
player_season_list.append({'player': player_name, 'season': year, 'age': age, 'team': team.strip(),
'position': position, 'g': g, 'gs': gs, 'mp': mp, 'fg': fg, 'fga': fga,
'fg_pct': fg_pct, 'three_p': three_p, 'three_pa': three_pa,
'three_pct': three_pct, 'two_p': two_p, 'two_pa': two_pa, 'two_pct': two_pct,
'efg': efg, 'ft': ft, 'fta': fta, 'ft_pct': ft_pct, 'orb': orb, 'drb': drb,
'trb': trb, 'ast': ast, 'stl': stl, 'blk': blk, 'tov': tov,'pf': pf,
'pts': pts, 'trp_dbl': trp_dbl})
prev_yr = year
browser.close()
player_seasons = pd.DataFrame(player_season_list, columns=['player', 'season', 'age', 'team', 'position', 'g', 'gs', 'mp', 'fg', 'fga', 'fg_pct',
'three_p', 'three_pa', 'three_pct', 'two_p', 'two_pa', 'two_pct', 'efg', 'ft',
'fta', 'ft_pct', 'orb', 'drb', 'trb', 'ast', 'stl', 'blk', 'tov','pf', 'pts', 'trp_dbl'])
player_seasons.set_index(['player', 'season'], inplace = True)
player_seasons
player_seasons.to_csv('data/player_seasons.csv')
###Output
_____no_output_____
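###Markdown
The long loop above repeats one defensive pattern: look up a `td` by its `data-stat` attribute and fall back to a default when the column did not exist in older seasons. A helper like the following (a hypothetical `stat` function, not part of the original scraper) captures that pattern; it is a sketch only:
###Code
def stat(row, name, default=''):
    # return the text of the td with the given data-stat, or a default if the column is missing
    cell = row.find("td", {"data-stat": name})
    return cell.get_text() if cell is not None else default
# e.g. stl = stat(row_t, "stl") and three_p = stat(row_t, "fg3", 0)
###Output
_____no_output_____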
###Markdown
Scrape Cost of Living Information From Numbeo.com
First, import the necessary libraries.
###Code
import pandas as pd
import requests
from bs4 import BeautifulSoup
import time
###Output
_____no_output_____
###Markdown
Create a list of all city names in Canada. Each name will be appended to the base URL to form a city-specific URL.
###Code
# Make request
import requests
cities_requests = requests.get("https://www.numbeo.com/cost-of-living/country_result.jsp?country=Canada")
# Create BeautifulSoup object
from bs4 import BeautifulSoup
cities_soup = BeautifulSoup(cities_requests.text)
# Get list of city names
cities = [option.text for option in cities_soup.find("select").find_all("option")]
###Output
_____no_output_____
###Markdown
Minor adjustments to the city names before they are appended to the URL (illustrated after this cell).
###Code
# Adjust city names to be appended as url
cities = cities[1:]
cities = [city.replace(", ", "-").replace(" ","-") for city in cities]
cities_canada = cities.copy()
cities_canada = [city_canada+"-Canada" for city_canada in cities_canada]
cities = cities + cities_canada
###Output
_____no_output_____
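###Markdown
For example (a hypothetical city name, assuming Numbeo's slug format), the transformation behaves as follows:
###Code
# illustrative check of the slug transformation
name = "Quebec City, QC"
slug = name.replace(", ", "-").replace(" ", "-")
print(slug, slug + "-Canada")  # Quebec-City-QC Quebec-City-QC-Canada
###Output
_____no_output_____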
###Markdown
From numbeo.com, scrape the cost of living for each city in Canada and collect the results in a dataframe. If there is no URL for a specific city, it is skipped.
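A note on the parsing step inside the loop: Numbeo's table appears to emit three `td` cells per item (name, average cost, and a range), which is why the code keeps indices where `i % 3 == 0` as the item and `i % 3 == 1` as the cost. The exact cell layout is inferred from the parsing logic, so treat it as an assumption; a minimal sketch of that step:

    cells = ["Meal, Inexpensive Restaurant", "22.00 C$", "15.00-35.00"]  # hypothetical row
    item, cost = cells[0], float(cells[1].split()[0].replace(",", ""))   # cost -> 22.0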
###Code
costofliving = pd.DataFrame()
for city in cities:
try:
# Make request
col_requests = requests.get("https://www.numbeo.com/cost-of-living/in/"+city)
# Create BeautifulSoup object
col = BeautifulSoup(col_requests.text)
# Get city and country
try:
city, country = col.find("span", {"class":"purple_light"}).text.split(",")
except:
city, country = city, "Canada"
# Find the living cost table
table = col.find("table", {"class":"data_wide_table new_bar_table"})
# Get item and cost
table_item_cost = table.find_all("td")
item = list()
cost = list()
for i in range(len(table_item_cost)):
if i % 3 == 0:
item.append(table_item_cost[i].text)
if i % 3 == 1:
cost.append(float(table_item_cost[i].text.split()[0].replace(",","")))
# Store result in dataframe
df1 = pd.DataFrame({'city':[city], 'country':[country]})
df2 = pd.DataFrame({item[j]:[cost[j]] for j in range(len(item))})
df = pd.concat([df1,df2], axis=1)
costofliving = pd.concat([costofliving,df], axis=0)
except:
pass
time.sleep(.1)
costofliving = costofliving.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Some city names also exist outside of Canada (e.g. London, UK), so here I remove rows whose country is not Canada.
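For instance (illustrative values, matching the replacements below):

    "ON".replace('ON', 'Canada').replace('BC', 'Canada')              # -> 'Canada' (row kept)
    "United Kingdom".replace('ON', 'Canada').replace('BC', 'Canada')  # -> 'United Kingdom' (row dropped)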
###Code
# Normalize province codes to "Canada", then remove rows whose country is not Canada
costofliving['country'] = [row.strip().replace('ON', 'Canada').replace('BC','Canada') for row in costofliving['country']]
costofliving = costofliving[costofliving['country'] == 'Canada']
costofliving['city'] = [row.strip() for row in costofliving['city']]
costofliving = costofliving.drop_duplicates(subset=['city'])
###Output
_____no_output_____
###Markdown
Write the results to a csv file.
###Code
costofliving.to_csv("costofliving.csv", index=False)
###Output
_____no_output_____
###Markdown
Immoscout24.de Scraper
A script for dumping (writing to `.csv`) property listings offered on [immoscout24.de](http://immoscout24.de)
###Code
from bs4 import BeautifulSoup
import json
import urllib.request
import random
from random import choice
import time
import geopandas as gpd
# urlquery from Achim Tack. Thank you!
# https://github.com/ATack/GoogleTrafficParser/blob/master/google_traffic_parser.py
def urlquery(url):
# function cycles randomly through different user agents and time intervals to simulate more natural queries
try:
sleeptime = float(random.randint(1,6))/5
time.sleep(sleeptime)
agents = ['Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1309.0 Safari/537.17',
'Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0',
'Opera/12.80 (Windows NT 5.1; U; en) Presto/2.10.289 Version/12.02',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
'Mozilla/3.0',
'Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3',
'Mozilla/5.0 (Linux; U; Android 0.5; en-us) AppleWebKit/522+ (KHTML, like Gecko) Safari/419.3',
'Opera/9.00 (Windows NT 5.1; U; en)']
        agent = choice(agents)
        opener = urllib.request.build_opener()
        opener.addheaders = [('User-agent', agent)]
        #print(agent)
        html = opener.open(url).read()
        time.sleep(sleeptime)
        return html
    except:
        print("error in urlquery")
def immoscout24parser(url):
''' Parser holt aus Immoscout24.de Suchergebnisseiten die Immobilien '''
try:
soup = BeautifulSoup(urlquery(url), 'html.parser')
scripts = soup.findAll('script')
for script in scripts:
#print script.text.strip()
if 'IS24.resultList' in script.text.strip():
s = script.string.split('\n')
for line in s:
#print('\n\n\'%s\'' % line)
                    if line.strip().startswith('resultListModel'):
                        # str.strip(chars) removes a character set, not a prefix,
                        # so split on the marker and drop the trailing comma instead
                        resultListModel = line.strip().split('resultListModel: ', 1)[1]
                        immo_json = json.loads(resultListModel[:-1])
                        searchResponseModel = immo_json[u'searchResponseModel']
                        resultlist_json = searchResponseModel[u'resultlist.resultlist']
                        return resultlist_json
    except Exception as e:
        print("error in immoscout24 parser: %s" % e)
###Output
_____no_output_____
###Markdown
Main Loop
Iterates over apartments and houses, for rent and for purchase, and collects the data (this run is configured for rental apartments in Berlin).
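With these settings, the first search URL expands as follows (derived directly from the format string in the cell below):

    url = 'http://www.immobilienscout24.de/Suche/S-T/P-%s/%s-%s/%s/%s?pagerReporting=true' % (1, 'Wohnung', 'Miete', 'Berlin', 'Berlin')
    # -> http://www.immobilienscout24.de/Suche/S-T/P-1/Wohnung-Miete/Berlin/Berlin?pagerReporting=true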
###Code
immos = {}
b = 'Berlin'
s = 'Berlin'
k = 'Wohnung'
w = 'Miete'
page = 0
print('Suche %s / %s' % (k, w))
while True:
page+=1
url = 'http://www.immobilienscout24.de/Suche/S-T/P-%s/%s-%s/%s/%s?pagerReporting=true' % (page, k, w, b, s)
# Because of some timeout or immoscout24.de errors,
# we try until it works \o/
resultlist_json = None
while resultlist_json is None:
try:
resultlist_json = immoscout24parser(url)
numberOfPages = int(resultlist_json[u'paging'][u'numberOfPages'])
pageNumber = int(resultlist_json[u'paging'][u'pageNumber'])
except:
pass
if page>numberOfPages:
break
# Get the data
for resultlistEntry in resultlist_json['resultlistEntries'][0][u'resultlistEntry']:
realEstate_json = resultlistEntry[u'resultlist.realEstate']
realEstate = {}
realEstate[u'Miete/Kauf'] = w
realEstate[u'Haus/Wohnung'] = k
realEstate['address'] = realEstate_json['address']['description']['text']
realEstate['city'] = realEstate_json['address']['city']
realEstate['postcode'] = realEstate_json['address']['postcode']
realEstate['quarter'] = realEstate_json['address']['quarter']
try:
realEstate['lat'] = realEstate_json['address'][u'wgs84Coordinate']['latitude']
realEstate['lon'] = realEstate_json['address'][u'wgs84Coordinate']['longitude']
except:
realEstate['lat'] = None
realEstate['lon'] = None
realEstate['title'] = realEstate_json['title']
realEstate['numberOfRooms'] = realEstate_json['numberOfRooms']
realEstate['livingSpace'] = realEstate_json['livingSpace']
realEstate['balcony'] = realEstate_json['balcony']
realEstate['builtInKitchen'] = realEstate_json['builtInKitchen']
realEstate['garden'] = realEstate_json['garden']
realEstate['price'] = realEstate_json['price']['value']
realEstate['privateOffer'] = realEstate_json['privateOffer']
realEstate['floorplan'] = realEstate_json['floorplan']
realEstate['from'] = realEstate_json['companyWideCustomerId']
realEstate['ID'] = realEstate_json[u'@id']
realEstate['url'] = u'https://www.immobilienscout24.de/expose/%s' % realEstate['ID']
immos[realEstate['ID']] = realEstate
print('Scrape Page %i/%i (%i Immobilien %s %s gefunden)' % (page, numberOfPages, len(immos), k, w))
###Output
Suche Wohnung / Miete
Scrape Page 1/167 (20 Immobilien Wohnung Miete gefunden)
...[Scrape Page 2/167 through 75/167 omitted]...
Scrape Page 76/167 (1519 Immobilien Wohnung Miete gefunden)
Scrape Page 77/167 (1539 Immobilien Wohnung Miete gefunden)
Scrape Page 78/167 (1559 Immobilien Wohnung Miete gefunden)
Scrape Page 79/167 (1579 Immobilien Wohnung Miete gefunden)
Scrape Page 80/167 (1599 Immobilien Wohnung Miete gefunden)
Scrape Page 81/167 (1619 Immobilien Wohnung Miete gefunden)
Scrape Page 82/167 (1639 Immobilien Wohnung Miete gefunden)
Scrape Page 83/167 (1659 Immobilien Wohnung Miete gefunden)
Scrape Page 84/167 (1679 Immobilien Wohnung Miete gefunden)
Scrape Page 85/167 (1699 Immobilien Wohnung Miete gefunden)
Scrape Page 86/167 (1719 Immobilien Wohnung Miete gefunden)
Scrape Page 87/167 (1739 Immobilien Wohnung Miete gefunden)
Scrape Page 88/167 (1759 Immobilien Wohnung Miete gefunden)
Scrape Page 89/167 (1779 Immobilien Wohnung Miete gefunden)
Scrape Page 90/167 (1799 Immobilien Wohnung Miete gefunden)
Scrape Page 91/167 (1819 Immobilien Wohnung Miete gefunden)
Scrape Page 92/167 (1839 Immobilien Wohnung Miete gefunden)
Scrape Page 93/167 (1859 Immobilien Wohnung Miete gefunden)
Scrape Page 94/167 (1879 Immobilien Wohnung Miete gefunden)
Scrape Page 95/167 (1899 Immobilien Wohnung Miete gefunden)
Scrape Page 96/167 (1919 Immobilien Wohnung Miete gefunden)
Scrape Page 97/167 (1939 Immobilien Wohnung Miete gefunden)
Scrape Page 98/167 (1959 Immobilien Wohnung Miete gefunden)
Scrape Page 99/167 (1979 Immobilien Wohnung Miete gefunden)
Scrape Page 100/167 (1999 Immobilien Wohnung Miete gefunden)
Scrape Page 101/167 (2019 Immobilien Wohnung Miete gefunden)
Scrape Page 102/167 (2039 Immobilien Wohnung Miete gefunden)
Scrape Page 103/167 (2059 Immobilien Wohnung Miete gefunden)
Scrape Page 104/167 (2079 Immobilien Wohnung Miete gefunden)
Scrape Page 105/167 (2099 Immobilien Wohnung Miete gefunden)
Scrape Page 106/167 (2119 Immobilien Wohnung Miete gefunden)
Scrape Page 107/167 (2139 Immobilien Wohnung Miete gefunden)
Scrape Page 108/167 (2159 Immobilien Wohnung Miete gefunden)
Scrape Page 109/167 (2179 Immobilien Wohnung Miete gefunden)
Scrape Page 110/167 (2199 Immobilien Wohnung Miete gefunden)
Scrape Page 111/167 (2219 Immobilien Wohnung Miete gefunden)
Scrape Page 112/167 (2239 Immobilien Wohnung Miete gefunden)
Scrape Page 113/167 (2259 Immobilien Wohnung Miete gefunden)
Scrape Page 114/167 (2279 Immobilien Wohnung Miete gefunden)
Scrape Page 115/167 (2299 Immobilien Wohnung Miete gefunden)
Scrape Page 116/167 (2319 Immobilien Wohnung Miete gefunden)
Scrape Page 117/167 (2339 Immobilien Wohnung Miete gefunden)
Scrape Page 118/167 (2359 Immobilien Wohnung Miete gefunden)
Scrape Page 119/167 (2379 Immobilien Wohnung Miete gefunden)
Scrape Page 120/167 (2399 Immobilien Wohnung Miete gefunden)
Scrape Page 121/167 (2419 Immobilien Wohnung Miete gefunden)
Scrape Page 122/167 (2439 Immobilien Wohnung Miete gefunden)
Scrape Page 123/167 (2459 Immobilien Wohnung Miete gefunden)
Scrape Page 124/167 (2479 Immobilien Wohnung Miete gefunden)
Scrape Page 125/167 (2499 Immobilien Wohnung Miete gefunden)
Scrape Page 126/167 (2519 Immobilien Wohnung Miete gefunden)
Scrape Page 127/167 (2539 Immobilien Wohnung Miete gefunden)
Scrape Page 128/167 (2559 Immobilien Wohnung Miete gefunden)
Scrape Page 129/167 (2579 Immobilien Wohnung Miete gefunden)
Scrape Page 130/167 (2599 Immobilien Wohnung Miete gefunden)
Scrape Page 131/167 (2619 Immobilien Wohnung Miete gefunden)
Scrape Page 132/167 (2639 Immobilien Wohnung Miete gefunden)
Scrape Page 133/167 (2659 Immobilien Wohnung Miete gefunden)
Scrape Page 134/167 (2679 Immobilien Wohnung Miete gefunden)
Scrape Page 135/167 (2699 Immobilien Wohnung Miete gefunden)
Scrape Page 136/167 (2719 Immobilien Wohnung Miete gefunden)
Scrape Page 137/167 (2739 Immobilien Wohnung Miete gefunden)
Scrape Page 138/167 (2759 Immobilien Wohnung Miete gefunden)
Scrape Page 139/167 (2779 Immobilien Wohnung Miete gefunden)
Scrape Page 140/167 (2799 Immobilien Wohnung Miete gefunden)
Scrape Page 141/167 (2819 Immobilien Wohnung Miete gefunden)
Scrape Page 142/167 (2839 Immobilien Wohnung Miete gefunden)
Scrape Page 143/167 (2859 Immobilien Wohnung Miete gefunden)
Scrape Page 144/167 (2879 Immobilien Wohnung Miete gefunden)
Scrape Page 145/167 (2899 Immobilien Wohnung Miete gefunden)
Scrape Page 146/167 (2919 Immobilien Wohnung Miete gefunden)
Scrape Page 147/167 (2939 Immobilien Wohnung Miete gefunden)
Scrape Page 148/167 (2959 Immobilien Wohnung Miete gefunden)
Scrape Page 149/167 (2979 Immobilien Wohnung Miete gefunden)
Scrape Page 150/167 (2999 Immobilien Wohnung Miete gefunden)
Scrape Page 151/167 (3019 Immobilien Wohnung Miete gefunden)
Scrape Page 152/167 (3039 Immobilien Wohnung Miete gefunden)
Scrape Page 153/167 (3059 Immobilien Wohnung Miete gefunden)
Scrape Page 154/167 (3079 Immobilien Wohnung Miete gefunden)
Scrape Page 155/167 (3099 Immobilien Wohnung Miete gefunden)
Scrape Page 156/167 (3119 Immobilien Wohnung Miete gefunden)
Scrape Page 157/167 (3139 Immobilien Wohnung Miete gefunden)
Scrape Page 158/167 (3159 Immobilien Wohnung Miete gefunden)
Scrape Page 159/167 (3179 Immobilien Wohnung Miete gefunden)
Scrape Page 160/167 (3199 Immobilien Wohnung Miete gefunden)
Scrape Page 161/167 (3219 Immobilien Wohnung Miete gefunden)
Scrape Page 162/167 (3239 Immobilien Wohnung Miete gefunden)
Scrape Page 163/167 (3259 Immobilien Wohnung Miete gefunden)
Scrape Page 164/167 (3279 Immobilien Wohnung Miete gefunden)
Scrape Page 165/167 (3299 Immobilien Wohnung Miete gefunden)
Scrape Page 166/166 (3319 Immobilien Wohnung Miete gefunden)
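###Markdown
A side note on the retry pattern: the inner `while resultlist_json is None` loop above retries forever and silently swallows every error. A bounded retry with a short pause is safer; the sketch below is illustrative (the retry count and sleep time are arbitrary choices, not values from the original).
###Code
import time

def fetch_with_retry(url, parser, max_tries=5, pause=2.0):
    # call a flaky parser a bounded number of times before giving up
    for attempt in range(max_tries):
        try:
            return parser(url)
        except Exception as e:
            print('attempt %i/%i failed: %s' % (attempt + 1, max_tries, e))
            time.sleep(pause)
    raise RuntimeError('giving up on %s after %i tries' % (url, max_tries))

# usage: resultlist_json = fetch_with_retry(url, immoscout24parser)
###Output
_____no_output_____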
###Markdown
Data Preparation & Cleaning
The collected data is converted into a clean data format that can also be read with e.g. Excel. The results are also pseudonymized, i.e. providers get unique numbers instead of real names.
###Code
import datetime
timestamp = datetime.datetime.strftime(datetime.datetime.now(), '%Y-%m-%d-%H-%M')
import pandas as pd
df = pd.DataFrame(immos).T
df.index.name = 'ID'
len(df)
df.head()
df['price_sq_m'] = df['price'] / df['livingSpace']
df.head()
df.columns
df.shape
###Output
_____no_output_____
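###Markdown
The pseudonymization mentioned above is not shown in the cells; a minimal sketch, assuming the `from` column (the `companyWideCustomerId`) is what identifies the provider:
###Code
# map each provider to a sequential pseudonym
# (assumption: the 'from' column identifies the provider)
provider_ids = {name: i for i, name in enumerate(df['from'].unique())}
df['from'] = df['from'].map(provider_ids)
###Output
_____no_output_____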
###Markdown
Dump everything to CSV
###Code
# open in text mode with utf-8 (Python 3) so both the comment header and to_csv write cleanly
f = open('%s-%s-%s.csv' % (timestamp, k, w), 'w', encoding='utf-8')
f.write('# %s %s from immoscout24.de on %s\n' % (k, w, timestamp))
df[(df['Haus/Wohnung']==k) & (df['Miete/Kauf']==w)].to_csv(f)
f.close()
#df.to_excel('%s-%s-%s.xlsx' % (timestamp, k, w))
###Output
_____no_output_____
###Markdown
Configure client
###Code
REFRESH_TOKEN = "WzdwNZfOzHxjvJvqUgA96CqCrJ0j1roy0"
client = QTClient(REFRESH_TOKEN)
###Output
_____no_output_____
###Markdown
Resolve tickers
###Code
positions = pd.DataFrame(client.get_account_positions())
watch_securities = pd.read_csv('/Users/kzabashta/Downloads/seclist.csv',
                               names=["ticker", "description"]).set_index("ticker")
symbols = dict(zip(positions["symbolId"], positions["symbol"]))
additional_securities = list(watch_securities.index)
for symbol in additional_securities:
matched_securities = client.search(symbol)
if len(matched_securities) > 1:
if not "." in matched_securities[0]['symbol'] and len(matched_securities[0]['symbol'].split()) == 1:
match_symbol = symbol.split()[0].strip()
match = list(filter(lambda x: x['symbol'] == "%s.TO" % match_symbol and x["isTradable"] == True,
matched_securities))
if len(match) == 1:
symbols[match[0]['symbolId']] = match[0]['symbol']
else:
symbols[matched_securities[0]['symbolId']] = matched_securities[0]['symbol']
else:
symbols[matched_securities[0]['symbolId']] = matched_securities[0]['symbol']
###Output
_____no_output_____
###Markdown
Get market data for all tickers
###Code
import traceback

FROM_DATE = '2016-10-01T00:00:00-05:00'
TO_DATE = '2019-10-20T23:59:59-05:00'
historicals = pd.DataFrame()
for symbolId, symbol in symbols.items():
try:
candles = pd.DataFrame(client.get_candles(symbolId, FROM_DATE, TO_DATE, 'OneDay')['candles'])
candles['end'] = pd.to_datetime(candles['end'], format='%Y-%m-%d')
candles['symbol'] = symbol
candles['symbolId'] = symbolId
historicals = pd.concat([historicals, candles])  # DataFrame.append was removed in pandas 2.x
except:
print ("Could not get market data for %s" % symbol)
traceback.print_exc()
historicals = historicals.set_index(["symbol", "end"])
###Output
Could not get market data for VXX
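###Markdown
A quick sanity transform before saving is often useful. The sketch below computes daily percentage returns per symbol; it assumes the candles include a `close` column and that rows are ordered by date (standard for Questrade candle responses, but treat both as assumptions here).
###Code
# daily % returns per symbol (assumes a 'close' column and date-ordered rows)
returns = historicals.groupby(level='symbol')['close'].pct_change()
returns.describe()
###Output
_____no_output_____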
###Markdown
Save the data
###Code
historicals.to_csv("historicals.csv")
###Output
_____no_output_____
###Markdown
- original paper of BEADS https://linkinghub.elsevier.com/retrieve/pii/S0169743914002032
- preprint http://www.laurent-duval.eu/Articles/Ning_X_2014_j-chemometr-intell-lab-syst_chromatogram_bedusbeads-preprint.pdf
- MATLAB toolbox http://eeweb.poly.edu/iselesni/pubs/BEADS_toolbox.zip
- pybeads repository https://github.com/skotaro/pybeads
- explanation in Japanese https://qiita.com/skotaro/items/d943bc9a50da9410e9cb

Packages and function
###Code
# if you haven't, you can install the package here
!pip install pybeads
import pybeads as be
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.facecolor'] = 'w'
def sigmoid(x):
return 1 / (1 + np.exp(-x))
###Output
_____no_output_____
###Markdown
Real chromatograms + artificial noise
Without additional background
- On close inspection, the provided chromatograms have a 'well-behaved' background: both ends smoothly approach zero.
###Code
# Eight chromatograms with different background levels look like this
data = np.genfromtxt('chromatograms_and_noise.csv', skip_header=4, delimiter=',')
fig, axes = plt.subplots(1, 2, figsize=(15, 4))
for i in range(8):
axes[0].plot(data[:, i], label=i)
axes[1].plot(data[:, i], '.-', label=i)
axes[1].set_ylim(0, 100)
axes[1].set_xlim(1500, 3500)
axes[1].legend(ncol=4)
# We are going to use forth data + noise
y = data[:, 3] + data[:, 8]
print(y.shape)
fig, axes = plt.subplots(1, 2, figsize=(15, 3))
axes[0].plot(y)
axes[1].plot(y)
axes[1].set_ylim(-10, 200)
# It takes 450 ms for 4000 data points.
fc = 0.006
d = 1
r = 6
amp = 0.8
lam0 = 0.5 * amp
lam1 = 5 * amp
lam2 = 4 * amp
Nit = 15
pen = 'L1_v2'
%timeit signal_est, bg_est, cost = be.beads(y, d, fc, r, Nit, lam0, lam1, lam2, pen, conv=None)
# Repeat this line because the timeit command does not save the outputs.
signal_est, bg_est, cost = be.beads(y, d, fc, r, Nit, lam0, lam1, lam2, pen, conv=None)
fig, axes = plt.subplots(3, 1, figsize=(12, 7), sharex=True)
fig.subplots_adjust(hspace=0)
fig.patch.set_color('white')
axes[0].plot(y, c='k', label='original data')
axes[0].plot(bg_est, c='r', label='BG estimated by BEADS')
axes[0].legend()
axes[0].set_ylim(-20, 350)
axes[0].set_xlim(0, 4000)
axes[1].plot(signal_est, label='signal estimated by BEADS')
axes[1].legend()
axes[1].set_ylim(-20, 350)
axes[2].plot(y-signal_est-bg_est, label='noise estimated by BEADS')
axes[2].set_ylim(-35, 35)
axes[2].legend()
plt.plot(cost, '.-')
###Output
_____no_output_____
###Markdown
Adding constant background
- Even this simple additional BG breaks the estimation, because both ends are now off the zero line.
###Code
bg_const = 50
signal_est, bg_est, cost = be.beads(y+bg_const, d, fc, r, Nit, lam0, lam1, lam2, pen, conv=None)
fig, axes = plt.subplots(3, 1, figsize=(12, 7), sharex=True)
fig.subplots_adjust(hspace=0)
fig.patch.set_color('white')
axes[0].plot(y+bg_const, c='k', label='original data')
axes[0].plot(bg_est, c='r', label='BG estimated by BEADS')
axes[0].legend()
axes[0].set_ylim(-20, 350)
axes[0].set_xlim(0, 4000)
axes[1].plot(signal_est, label='signal estimated by BEADS')
axes[1].legend()
axes[1].set_ylim(-20, 350)
axes[2].plot(y+bg_const-signal_est-bg_est, label='noise estimated by BEADS')
axes[2].set_ylim(-35, 35)
axes[2].legend()
###Output
_____no_output_____
###Markdown
Adding nasty background
- Even with a background this bad, it works if you apply a trick.
###Code
# Oh, it's awful...
bg = 5e-5*(np.linspace(0, 3999, num=4000)-2000)**2
y_difficult = y + bg
plt.plot(y_difficult)
plt.ylim(0, 350)
###Output
_____no_output_____
###Markdown
Extending the data with a sigmoid function
- To make both ends smoothly approach zero, we extend the data with a sigmoid function.
###Code
xscale_l, xscale_r = 30, 30
dx = 1
y_difficult_l = y_difficult[0]*sigmoid(1/xscale_l*np.arange(-5*xscale_l, 5*xscale_l, dx))
y_difficult_r = y_difficult[-1]*sigmoid(-1/xscale_r*np.arange(-5*xscale_r, 5*xscale_r, dx))
y_difficult_ext = np.hstack([y_difficult_l, y_difficult, y_difficult_r])
len_l, len_o, len_r = len(y_difficult_l), len(y_difficult), len(y_difficult_r)
plt.plot(range(len_l, len_l+len_o), y_difficult)
plt.plot(y_difficult_l, 'C1')
plt.plot(range(len_l+len_o, len_l+len_o+len_r), y_difficult_r, 'C1')
plt.ylim(0, 350)
# Very close.
signal_est, bg_est, cost = be.beads(y_difficult_ext, d, fc, r, Nit, lam0, lam1, lam2, pen, conv=None)
fig, axes = plt.subplots(3, 1, figsize=(12, 7), sharex=True)
fig.subplots_adjust(hspace=0)
fig.patch.set_color('white')
axes[0].plot(y_difficult_ext, c='k', label='original data')
axes[0].plot(bg_est, c='r', label='BG estimated by BEADS')
axes[0].legend()
axes[0].set_ylim(-20, 350)
axes[1].plot(signal_est, label='signal estimated by BEADS')
axes[1].legend()
axes[1].set_ylim(-20, 350)
axes[2].plot(y_difficult_ext-signal_est-bg_est, label='noise estimated by BEADS')
axes[2].set_ylim(-35, 35)
axes[2].legend()
# We need the extension more stretched.
xscale_l, xscale_r = 100, 100
dx = 1
y_difficult_l = y_difficult[0]*sigmoid(1/xscale_l*np.arange(-5*xscale_l, 5*xscale_l, dx))
y_difficult_r = y_difficult[-1]*sigmoid(-1/xscale_r*np.arange(-5*xscale_r, 5*xscale_r, dx))
y_difficult_ext = np.hstack([y_difficult_l, y_difficult, y_difficult_r])
len_l, len_o, len_r = len(y_difficult_l), len(y_difficult), len(y_difficult_r)
plt.plot(range(len_l, len_l+len_o), y_difficult)
plt.plot(y_difficult_l, 'C1')
plt.plot(range(len_l+len_o, len_l+len_o+len_r), y_difficult_r, 'C1')
plt.ylim(0, 350)
signal_est, bg_est, cost = be.beads(y_difficult_ext, d, fc, r, Nit, lam0, lam1, lam2, pen, conv=None)
fig, axes = plt.subplots(3, 1, figsize=(12, 7), sharex=True)
fig.subplots_adjust(hspace=0)
fig.patch.set_color('white')
axes[0].plot(y_difficult_ext, c='k', label='original data')
axes[0].plot(bg_est, c='r', label='BG estimated by BEADS')
axes[0].legend()
axes[0].set_ylim(-20, 350)
axes[1].plot(signal_est, label='signal estimated by BEADS')
axes[1].legend()
axes[1].set_ylim(-20, 350)
axes[2].plot(y_difficult_ext-signal_est-bg_est, label='noise estimated by BEADS')
axes[2].set_ylim(-35, 35)
axes[2].legend()
###Output
_____no_output_____
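###Markdown
The extension-and-crop code above is duplicated for each `xscale`. A small helper makes experimenting easier; this is a sketch (the function name and interface are mine, not part of pybeads):
###Code
def extend_with_sigmoid(y, xscale_l=100, xscale_r=100, dx=1):
    # prepend/append sigmoid ramps so both ends decay smoothly to zero,
    # and return the padded signal plus the left pad length for cropping
    left = y[0] * sigmoid(1/xscale_l * np.arange(-5*xscale_l, 5*xscale_l, dx))
    right = y[-1] * sigmoid(-1/xscale_r * np.arange(-5*xscale_r, 5*xscale_r, dx))
    return np.hstack([left, y, right]), len(left)

y_ext, pad = extend_with_sigmoid(y_difficult)
# after running BEADS on y_ext, crop the estimates back to the original support:
# signal_est[pad:pad + len(y_difficult)]
###Output
_____no_output_____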
###Markdown
Selected Exercises for `numpy` Import numpy
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Check the version of numpy
###Code
np.__version__
###Output
_____no_output_____
###Markdown
Create a 10×10 matrix whose elements are all 1
###Code
a = np.ones((10, 10))
a
###Output
_____no_output_____
###Markdown
Create a 3×3 matrix with elements 0 through 8
###Code
b = np.arange(0, 9, 1)
b.reshape((3, 3))
###Output
_____no_output_____
###Markdown
Find the maximum and minimum elements of the matrix x below
###Code
x = np.random.random((10, 10))
x.max(), x.min()
###Output
_____no_output_____
###Markdown
The matrix x below is a 5×5 matrix of ones. How do you add a border of width 1 so that the new matrix is 7×7 with a ring of zeros around the edge?
###Code
x = np.ones((5, 5))
m = np.zeros((1, 5))
y = np.concatenate([m, x, m], axis=0)
n = np.zeros((7, 1))
np.concatenate([n, y, n], axis=1)
### Better Answer
np.pad(x, pad_width=1, constant_values=0)
###Output
_____no_output_____
###Markdown
Create a 5×5 matrix filled with random numbers, then standardize it (subtract the mean and divide by the standard deviation). What is the standard deviation after standardization?
###Code
array = np.random.rand(5, 5)
((array - array.mean()) / array.std()).std()
###Output
_____no_output_____
###Markdown
Negate every value in the array x that is greater than 3 and less than 8, then print the array to check the result
###Code
x = np.arange(11)
x[(x > 3) & (x < 8)] *= -1
print(x)
###Output
[ 0 1 2 3 -4 -5 -6 -7 8 9 10]
###Markdown
How do you keep 1.34353 to two decimal places? How do you keep only the integer part? How do you round to the nearest integer?
###Code
np.around(1.34353, decimals=2), np.around(1.34353) # I didn't understand what the negative-number option means
### Answer
round(1.34353, 2), round(1.34353), int(1.34353)
###Output
_____no_output_____
###Markdown
Using the `np.intersect1d` function, which elements do the following two arrays have in common? And at which positions?
###Code
np.intersect1d([1, 2, 3, 4, 5], [2, 4, 6])
###Output
_____no_output_____
###Markdown
What's today's date? You can just ask numpy, haha
###Code
np.datetime64('today')
###Output
_____no_output_____
###Markdown
Reverse the order of the array below
###Code
x = np.random.randint(low=0, high=100, size=(10))
x
#x.reverse() does not work: reverse() is a list method, ndarrays don't have it
### Answer
x[::-1]
###Output
_____no_output_____
###Markdown
Sort the array below in descending order, then print it
###Code
x = np.random.randint(low=0, high=100, size=(10)).reshape(1, -1)
x
print(np.ndarray.sort(x, axis=1)) # prints None: ndarray.sort() works in place and returns nothing
### Answer
y = np.sort(x)[0]
y[::-1]
###Output
_____no_output_____
###Markdown
X and Y are the coordinates of points in the Cartesian plane. Convert these points to polar coordinates.
###Code
Z = np.random.random((10,2))
X, Y = Z[:,0], Z[:,1]
x = np.arctan( Y / X ) / 180 * np.pi
y = np.sqrt( X ** 2 + Y ** 2)
### Answer
x = np.arctan( Y / X ) * 180 / np.pi ## rad to deg: np.rad2deg()
y = np.sqrt( X ** 2 + Y ** 2)
###Output
_____no_output_____
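###Markdown
A note on quadrants: `np.arctan(Y / X)` is only correct when X > 0 and fails for X = 0. `np.arctan2` handles all four quadrants; a short sketch:
###Code
theta_deg = np.rad2deg(np.arctan2(Y, X))  # angle in degrees, correct in all quadrants
r = np.hypot(X, Y)                        # equivalent to np.sqrt(X**2 + Y**2)
###Output
_____no_output_____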
###Markdown
Figure out what this line is doing
###Code
list(zip(X, Y))
#zip(X, Y) turns X and Y into a zip object of coordinate pairs, and list() turns that into a list
#### That's just restating the code, isn't it?
###Output
_____no_output_____
###Markdown
Replace the element at the position of Z's maximum with -99
###Code
Z = np.random.random(10)
Z[np.argmax(Z)] = -99
###Output
_____no_output_____
###Markdown
Which element of the array below is closest to 15?
###Code
Z = np.random.randint(low=0, high=25, size=(30))
np.argmin(Z - 15)
### Answer: take the absolute difference, otherwise the most negative difference wins
np.argmin(np.abs(Z - 15))
###Output
_____no_output_____
###Markdown
Figure out what the following block is doing
###Code
X, Y = np.meshgrid(np.linspace(-1,1,10), np.linspace(-1,1,10))
D = np.sqrt(X * X + Y * Y)
sigma, mu = 1.0, 0.0
G = np.exp(-((D - mu)**2 / (2.0 * sigma**2)))
import matplotlib.pyplot as plt
plt.imshow(G)
plt.colorbar()
plt.show()
#The value at each point (X, Y) is a Gaussian of its distance from the origin, with mean 0 and std 1 (but where did the sqrt(2*pi) factor from the formula go?)
### It's showing the color, not the distance OwO
###Output
_____no_output_____
###Markdown
Use the np.load function to load the numpy matrix stored in the `array-with-nan.npy` file and inspect it as an image. 1) What are the white dots? 2) Check whether the matrix contains any `np.nan` (Not A Number). If so, where are they? What are their indices? 3) Replace all the nans with 9 and plot the image again (remember the colorbar). What did the white dots become?
###Code
S = np.load('array-with-nan.npy')
plt.imshow(S)
plt.colorbar()
plt.show()
# 1) nan ('not a number') has no numeric value, so no color can represent it
np.where(np.isnan(S))
S[np.where(np.isnan(S))] = 9
plt.imshow(S)
plt.colorbar()
plt.show()
# 3) they turned into ✨
###Output
_____no_output_____
###Markdown
How do you turn the variable x = True into False?
###Code
x = True
np.logical_not(x)
### Answer
not True
###Output
_____no_output_____
###Markdown
DO NOT CHANGE THE LINE BELOW. If you are developing in a local environment, then grab mnist.npz from the Coursera Jupyter Notebook and place it inside a local folder and edit the path to that location
###Code
path = f"{getcwd()}/drive/My Drive/datasets/ex3-coursera-mnist/mnist.npz"
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
###Output
_____no_output_____
###Markdown
GRADED FUNCTION: train_mnist_conv
###Code
def train_mnist_conv():
# Please write your code only where you are indicated.
# please do not remove model fitting inline comments.
# YOUR CODE STARTS HERE
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if (logs.get('acc') >= 0.998):
print('\nReached 99.8% accuracy so cancelling training')
self.model.stop_training = True
# YOUR CODE ENDS HERE
callbacks = myCallback()
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data(path=path)
# YOUR CODE STARTS HERE
training_images = training_images.reshape(60000, 28, 28, 1)
training_images = training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images = test_images / 255.0
# YOUR CODE ENDS HERE
model = tf.keras.models.Sequential([
# YOUR CODE STARTS HERE
tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model fitting
model.summary()
history = model.fit(training_images, training_labels, epochs=20, callbacks=[callbacks])
# model fitting
return history.epoch, history.history['acc'][-1]
_, _ = train_mnist_conv()
###Output
_____no_output_____
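###Markdown
A portability note: newer TensorFlow/Keras versions log the training metric under `'accuracy'` rather than `'acc'`, so the callback above would never fire there. A version-tolerant sketch of the same check:
###Code
class TolerantCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        acc = logs.get('accuracy', logs.get('acc', 0.0))  # key name differs across versions
        if acc >= 0.998:
            print('\nReached 99.8% accuracy so cancelling training')
            self.model.stop_training = True
###Output
_____no_output_____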
###Markdown
Chapter 5 - Dimensionality Reduction Methods Segment 2 - Principal component analysis (PCA)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pylab as plt
import seaborn as sb
from IPython.display import Image
from IPython.core.display import HTML
from pylab import rcParams
import sklearn
from sklearn import datasets
from sklearn import decomposition
from sklearn.decomposition import PCA
%matplotlib inline
rcParams['figure.figsize'] = 5, 4
sb.set_style('whitegrid')
###Output
_____no_output_____
###Markdown
PCA on the iris dataset
###Code
iris = datasets.load_iris()
X = iris.data
variable_names = iris.feature_names
print(variable_names)
X[0:10,]
pca = decomposition.PCA()
iris_pca = pca.fit_transform(X)
pca.explained_variance_ratio_
print(f"The first and second features contains 97% of the relevant information for the dataset")
pca.explained_variance_ratio_.sum()
comps = pd.DataFrame(pca.components_, columns=variable_names)
comps
sb.heatmap(comps, cmap="Blues", annot=True)
###Output
_____no_output_____
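###Markdown
Instead of eyeballing the ratios, you can let scikit-learn pick the number of components for a target variance by passing a float as `n_components`; a short sketch:
###Code
pca95 = decomposition.PCA(n_components=0.95)  # keep enough components for 95% of the variance
X_reduced = pca95.fit_transform(X)
pca95.n_components_, X_reduced.shape
###Output
_____no_output_____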
###Markdown
Data Preparation IMDB Crawler
###Code
# import package
import pandas as pd
import time
import urllib.request
from lxml.html import fromstring
from bs4 import BeautifulSoup
# download html
def download(url):
print('Downloading:', url)
request = urllib.request.Request(url)
request.add_header('User-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36')
resp = urllib.request.urlopen(request)
html = resp.read().decode('utf-8')
return html
# content to be scrape
Name = []
Year = []
Rate = []
Level = []
Directors = []
Writers = []
Stars = []
Genres = []
Runtime = []
Country = []
Language = []
Budget = []
Box_Office_USA = []
Box_Office_World = []
start_url = download('https://www.imdb.com/chart/top/?ref_=nv_mv_250')
domain = 'https://www.imdb.com/'
start_soup = BeautifulSoup(start_url)
# scrape every item
for k in range(250):
sub_html = start_soup.find_all('tbody')[0].find_all('a')[2*k+1].get('href')
url = download(domain + sub_html)
time.sleep(3)
tree = fromstring(url)
soup = BeautifulSoup(url)
name = soup.find('span',{'id':'titleYear'}).previous_sibling
Name.append(name.replace(name[-1], ''))  # drop the trailing non-breaking space before the year
Year.append(tree.xpath('//*[@id="titleYear"]/a')[0].text_content())
Rate.append(tree.xpath('//*[@id="title-overview-widget"]/div[1]/div[2]/div/div[1]/div[1]/div[1]/strong/span')[0].text_content())
Level.append(soup.find('div',{'class':'subtext'}).span.previous_sibling.strip())
try:
Directors.append(soup.find(text='Director:').parent.parent.find('a').get_text())
except AttributeError:
directors = [k.get_text() for k in soup.find(text='Directors:').parent.parent.find_all('a')]
Directors.append('/'.join(directors))
try:
writers = [k.get_text() for k in soup.find(text='Writers:').parent.parent.find_all('a')]
Writers.append('/'.join(writers))
except AttributeError:
Writers.append(soup.find(text='Writer:').parent.parent.find('a').get_text())
stars = [k.get_text() for k in soup.find(text='Stars:').parent.parent.find_all('a')]
Stars.append('/'.join(stars))
genres = [k.get_text().strip() for k in soup.find(text='Genres:').parent.parent.find_all('a')]
Genres.append('/'.join(genres))
try:
Runtime.append(soup.find(text='Runtime:').parent.parent.time.get_text())
except:
Runtime.append(None)
countries = [k.get_text() for k in soup.find(text='Country:').parent.parent.find_all('a')]
Country.append('/'.join(countries))
languages = [k.get_text() for k in soup.find(text='Language:').parent.parent.find_all('a')]
Language.append('/'.join(languages))
try:
Budget.append(soup.find(text='Budget:').parent.next_sibling.strip())
except AttributeError:
Budget.append(None)
try:
Box_Office_USA.append(soup.find(text='Gross USA:').parent.next_sibling.strip())
except AttributeError:
Box_Office_USA.append(None)
try:
Box_Office_World.append(soup.find(text='Cumulative Worldwide Gross:').parent.next_sibling.strip())
except AttributeError:
Box_Office_World.append(None)
# combine each column
Name_pd = pd.DataFrame(Name)
Year_pd = pd.DataFrame(Year)
Rate_pd = pd.DataFrame(Rate)
Level_pd = pd.DataFrame(Level)
Directors_pd = pd.DataFrame(Directors)
Writers_pd = pd.DataFrame(Writers)
Stars_pd = pd.DataFrame(Stars)
Genres_pd = pd.DataFrame(Genres)
Runtime_pd = pd.DataFrame(Runtime)
Country_pd = pd.DataFrame(Country)
Language_pd = pd.DataFrame(Language)
Budget_pd = pd.DataFrame(Budget)
Box_Office_USA_pd = pd.DataFrame(Box_Office_USA)
Box_Office_World_pd = pd.DataFrame(Box_Office_World)
movie_data = pd.concat([Name_pd,Year_pd,Rate_pd,Level_pd,Directors_pd,Writers_pd,Stars_pd,Genres_pd,Runtime_pd,\
Country_pd,Language_pd,Budget_pd,Box_Office_USA_pd,Box_Office_World_pd],axis=1)
movie_data.columns=['Name','Year','Rate','Level','Directors','Writers','Stars','Genres','Runtime','Country',\
'Language','Budget','Box_Office_USA','Box_Office_World']
# output
outputpath='F:/imdb_movie_all.csv' ## The path needs to be altered!
movie_data.to_csv(outputpath,sep=',',index=False,header=True,encoding='utf_8_sig')
###Output
_____no_output_____
###Markdown
IMDB Sentiment Analysis
###Code
import os
from os import listdir
import random
import string
import nltk
import numpy as np
import pandas as pd
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn import set_config
from sklearn.metrics import f1_score
from sklearn.linear_model import LogisticRegression
from nltk.corpus import stopwords
set_config(print_changed_only=False)
###Output
_____no_output_____
###Markdown
Classes
###Code
class Review:
def __init__(self, text, polarity):
    # strip punctuation; replacing "br" removes leftover <br> tag text
    # (note: it also removes the substring "br" inside other words)
    self.text = text.translate(str.maketrans(' ', ' ',
                               string.punctuation)).replace("br", "")
    self.polarity = polarity
class Features:
def __init__(self, reviews):
self.reviews = reviews.text
self.new_text = self.Stopwords()
def Stopwords(self):
sw_nltk = stopwords.words('english')
words = [word for word in self.reviews.split()
if word.lower() not in sw_nltk]
new_text = " ".join(words)
return new_text
class Target:
def __init__(self, reviews):
self.reviews = reviews.polarity
class Vectorization:
def __init__(self, X):
self.X = X
# X_train, X_test CountVectorizer() transformations
self.X_train_vectors_count = self.count_vectorization()[0]
self.X_test_vectors_count = self.count_vectorization()[1]
self.Vectorizer_count = self.count_vectorization()[2]
# X_train, X_test TfidfVectorizer() transformation
self.X_train_vectors_tfidf = self.tfidf_vectorization()[0]
self.X_test_vectors_tfidf = self.tfidf_vectorization()[1]
self.Vectorizer_tfidf = self.tfidf_vectorization()[2]
def count_vectorization(self):
vectorizer = CountVectorizer()
X_train_vectors = vectorizer.fit_transform(self.X[0])
X_test_vectors = vectorizer.transform(self.X[1])
return (X_train_vectors, X_test_vectors, vectorizer)
def tfidf_vectorization(self):
vectorizer = TfidfVectorizer(lowercase=True, min_df=0.0001)
X_train_vectors = vectorizer.fit_transform(self.X[0])
X_test_vectors = vectorizer.transform(self.X[1])
return (X_train_vectors, X_test_vectors, vectorizer)
class Logistic_Regression:
def __init__(self, X_train, y_train):
self.X_train = X_train
self.y_train = y_train
self.model = self.Model()
def Model(self):
clf_log = LogisticRegression(max_iter=1000)
clf_log.fit(self.X_train, self.y_train)
return (clf_log)
class Metrics:
def __init__(self, model, X_test, y_test):
self.model = model
self.X_test = X_test
self.y_test = y_test
self.mean_score = self.mean_score()
self.f1_score = self.F1_score()
def mean_score(self):
return self.model.score(self.X_test, self.y_test)
def F1_score(self):
return f1_score(self.y_test, self.model.predict(self.X_test),
average=None, labels=['positive', 'negative'])
###Output
_____no_output_____
###Markdown
Store negative reviews
###Code
reviews = []
for links in os.listdir('./aclImdb_v1/aclImdb/test/neg'):
with open('./aclImdb_v1/aclImdb/test/neg/{}'.format(links),
encoding="utf8") as f:
for line in f:
reviews.append(Review(line, 'negative'))
###Output
_____no_output_____
###Markdown
Store positive reviews
###Code
for links in os.listdir('./aclImdb_v1/aclImdb/test/pos'):
with open('./aclImdb_v1/aclImdb/test/pos/{}'.format(links),
encoding="utf8") as f:
for line in f:
reviews.append(Review(line, 'positive'))
###Output
_____no_output_____
###Markdown
Shuffle the reviews so that the list isn't all positive reviews followed by all negative ones
###Code
random.shuffle(reviews)
###Output
_____no_output_____
###Markdown
Seperate into X and y variables
###Code
X = []
for line in reviews:
X.append(Features(line).new_text)
y = []
for line in reviews:
y.append(Target(line).reviews)
###Output
_____no_output_____
###Markdown
Train-test split
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.33,
random_state=42)
###Output
_____no_output_____
###Markdown
Vectorize X_train and X_test
###Code
vectorize = Vectorization((X_train, X_test))
###Output
_____no_output_____
###Markdown
Bag of words vectorization
###Code
X_train_vectors_bow = vectorize.X_train_vectors_count
X_test_vectors_bow = vectorize.X_test_vectors_count
###Output
_____no_output_____
###Markdown
Tfidf Vectorization
###Code
X_train_vectors_tfidf = vectorize.X_train_vectors_tfidf
X_test_vectors_tfidf = vectorize.X_test_vectors_tfidf
###Output
_____no_output_____
###Markdown
Classification Logistic Regression
###Code
model_bow = Logistic_Regression(X_train_vectors_bow, y_train).model
model_tfidf = Logistic_Regression(X_train_vectors_tfidf, y_train).model
X_train_vectors_tfidf
###Output
_____no_output_____
###Markdown
Evaluation Mean Accuracy
###Code
# Mean Accuracy
print (Metrics(model_bow, X_test_vectors_bow, y_test).mean_score)
print (Metrics(model_tfidf, X_test_vectors_tfidf, y_test).mean_score)
###Output
0.8797575757575757
0.8900606060606061
###Markdown
F1 Scores
###Code
print (Metrics(model_bow, X_test_vectors_bow, y_test).f1_score)
print (Metrics(model_tfidf, X_test_vectors_tfidf, y_test).f1_score)
###Output
[0.8804243 0.87908337]
[0.89164974 0.88842416]
###Markdown
I want to improve our scores. There are a couple of things that we can do:
* Making each string lowercase
* Removing punctuation
* Removing common words such as "and", "to", "or", "the", "is", and more
* Adding weights to words

Save our model, and vectorizer
###Code
import pickle
model_name = 'imdb.pkl'
pickle.dump(model_tfidf, open('./webapp/model/{}'.format(model_name), 'wb'))
loaded_model = pickle.load(open('./webapp/model/{}'.format(model_name), 'rb'))
Vector = vectorize.Vectorizer_tfidf
vector_name = 'tfidf1.pkl'
pickle.dump(Vector, open('./webapp/model/{}'.format(vector_name), "wb"))
tf1 = pickle.load(open('./webapp/model/{}'.format(vector_name), 'rb'))
###Output
_____no_output_____
###Markdown
Testing out pickled model and vectorizer Converting string into vectorizer form
###Code
string = ['bad bad bad bad bad bad', 'good good good good', 'bad good bad', 'This movie is awful. How can anyone spend their time with this garbage?!']
tf1_vectored = tf1.transform(string)
###Output
_____no_output_____
###Markdown
Predicting the result
###Code
loaded_model.predict(tf1_vectored)
###Output
_____no_output_____
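###Markdown
Of the improvement ideas above, the vectorizer already lowercases and the `Features` class removes stopwords, so the next cheap win is usually word pairs (bigrams). A sketch; the `ngram_range` and `min_df` values are illustrative, not tuned:
###Code
bigram_vec = TfidfVectorizer(lowercase=True, ngram_range=(1, 2), min_df=5)
X_train_bi = bigram_vec.fit_transform(X_train)
X_test_bi = bigram_vec.transform(X_test)
model_bi = LogisticRegression(max_iter=1000).fit(X_train_bi, y_train)
model_bi.score(X_test_bi, y_test)
###Output
_____no_output_____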
###Markdown
HOLA AMIGOS
###Code
% reset
from sklearn.model_selection import train_test_split
import numpy as np
from helpers import *
import os
import nltk
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
import pathlib
from multiprocessing import Process,Lock,Value
%load_ext autoreload
%autoreload 2
dirs = ['Data/aclImdb/train/pos','Data/aclImdb/train/neg','Data/aclImdb/train/unsup']
clean_root = speedy(dirs,15)
from pathlib import Path
class MyDocs(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
pathlist = Path(self.dirname).glob('**/*')
for path in pathlist:
path_in_str = str(path)
if os.path.isfile(path_in_str):
f=open(path_in_str)
yield f.read().split()
from gensim.models import Word2Vec,TfidfModel,Phrases
from gensim.corpora import Dictionary
import gensim
docs = MyDocs(clean_root)
bigram_transformer = Phrases(docs)
wv_model = Word2Vec(bigram_transformer[docs], size=300, window=5, min_count=100, workers=6)
def tf_idf_model(doc_path,model_name=None):
model_path = f'models/tf_{model_name}'
dict_path = f'models/dict_{model_name}'
model_exists = os.path.isfile(model_path)
dict_exists = os.path.isfile(dict_path)
dir_exists = os.path.isdir('models')
save = model_name!=None
docs = MyDocs(doc_path)
bow_dict = Dictionary.load(dict_path) if dict_exists else Dictionary(docs)
corpus = [bow_dict.doc2bow(doc) for doc in docs]
tf_model = TfidfModel.load(model_path) if model_exists else TfidfModel(corpus)
if save:
if not dir_exists:
pathlib.Path('models').mkdir()
if not model_exists:
tf_model.save(model_path)
if not dict_exists:
bow_dict.save(dict_path)
return corpus,bow_dict,tf_model
pos_corpus, pos_dict, pos_tf_model = tf_idf_model(f'{clean_root}/aclImdb/train/pos','pos')
neg_corpus, neg_dict, neg_tf_model = tf_idf_model(f'{clean_root}/aclImdb/train/neg','neg')
def doc2vec_factory(bow_dict,tf_model,word2vec):
def doc2vec(doc_bow):
doc = np.array([bow_dict[word_id] for word_id,freq in doc_bow])
doc_tf_idf = np.array([pair[1] for pair in tf_model[doc_bow] if bow_dict[pair[0]] in word2vec.wv.vocab])
vecs = np.array([word2vec.wv[word] for word in doc if word in word2vec.wv.vocab])
doc_vec = np.sum(doc_tf_idf.reshape(-1,1)*vecs,axis=0)
return doc_vec.reshape(-1,1)
return doc2vec
pos_doc2vec = doc2vec_factory(pos_dict,pos_tf_model,wv_model)
neg_doc2vec = doc2vec_factory(neg_dict,neg_tf_model,wv_model)
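# Note: the tf-idf weighted sum in doc2vec above is unnormalized; models that
# compare documents by cosine similarity often behave better with unit-length
# vectors. A small sketch (the helper name is mine, not part of the original):
def normalize(vec):
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec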
pos_vecs = []
for corpus in pos_corpus:
pos_vecs.append(pos_doc2vec(corpus))
pos_vecs = np.array(pos_vecs)
neg_vecs = []
for corpus in neg_corpus:
neg_vecs.append(neg_doc2vec(corpus))
neg_vecs = np.array(neg_vecs)
neg_vecs = neg_vecs.reshape(-1,300)
pos_vecs = pos_vecs.reshape(-1,300)
X = np.concatenate((pos_vecs,neg_vecs),axis=0)
y = np.asarray([1]*pos_vecs.shape[0] + [-1]*pos_vecs.shape[0])
X_train,X_val,y_train,y_val = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn.svm import SVC
clf = SVC(C=1)
clf.fit(X_train, y_train)
np.mean(clf.predict(X_val) == y_val)
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=500,min_samples_split=50,n_jobs=-1)
forest.fit(X_train,y_train)
np.mean(forest.predict(X_val) == y_val)
from sklearn.ensemble import AdaBoostClassifier
ada = AdaBoostClassifier(n_estimators=200)
ada.fit(X_train,y_train)
np.mean(ada.predict(X_val) == y_val)
from sklearn.ensemble import GradientBoostingClassifier
gboost = GradientBoostingClassifier(n_estimators=300)
gboost.fit(X_train,y_train)
np.mean(gboost.predict(X_val) == y_val)
###Output
_____no_output_____
###Markdown
Custom Layers

One factor behind deep learning's success is the availability of a wide range of layers that can be composed in creative ways to design architectures suitable for a wide variety of tasks. For instance, researchers have invented layers specifically for handling images, text, looping over sequential data, and performing dynamic programming. Sooner or later, you will encounter or invent a layer that does not exist yet in the deep learning framework. In these cases, you must build a custom layer. In this section, we show you how.

(**Layers without Parameters**)

To start, we construct a custom layer that does not have any parameters of its own. This should look familiar if you recall our introduction to blocks in :numref:`sec_model_construction`. The following `CenteredLayer` class simply subtracts the mean from its input. To build it, we simply need to inherit from the base layer class and implement the forward propagation function.
###Code
import tensorflow as tf
class CenteredLayer(tf.keras.Model):
def __init__(self):
super().__init__()
def call(self, inputs):
return inputs - tf.reduce_mean(inputs)
###Output
_____no_output_____
###Markdown
Let us verify that our layer works as intended by feeding some data through it.
###Code
layer = CenteredLayer()
layer(tf.constant([1, 2, 3, 4, 5]))
###Output
_____no_output_____
###Markdown
We can now [**incorporate our layer as a component in constructing more complex models.**]
###Code
net = tf.keras.Sequential([tf.keras.layers.Dense(128), CenteredLayer()])
###Output
_____no_output_____
###Markdown
As an extra sanity check, we can send random data through the network and check that the mean is in fact 0. Because we are dealing with floating point numbers, we may still see a very small nonzero number due to quantization.
###Code
Y = net(tf.random.uniform((4, 8)))
tf.reduce_mean(Y)
###Output
_____no_output_____
###Markdown
[**Layers with Parameters**]

Now that we know how to define simple layers, let us move on to defining layers with parameters that can be adjusted through training. We can use built-in functions to create parameters, which provide some basic housekeeping functionality. In particular, they govern access, initialization, sharing, saving, and loading model parameters. This way, among other benefits, we will not need to write custom serialization routines for every custom layer.

Now let us implement our own version of the fully-connected layer. Recall that this layer requires two parameters, one to represent the weight and the other for the bias. In this implementation, we bake in the ReLU activation as a default. This layer takes a single constructor argument, `units`, which denotes the number of outputs; the input dimension is inferred from the data in `build`.
###Code
class MyDense(tf.keras.Model):
def __init__(self, units):
super().__init__()
self.units = units
def build(self, X_shape):
self.weight = self.add_weight(
name='weight', shape=[X_shape[-1], self.units],
initializer=tf.random_normal_initializer())
self.bias = self.add_weight(name='bias', shape=[self.units],
initializer=tf.zeros_initializer())
def call(self, X):
linear = tf.matmul(X, self.weight) + self.bias
return tf.nn.relu(linear)
###Output
_____no_output_____
###Markdown
Next, we instantiate the `MyDense` class and access its model parameters.
###Code
dense = MyDense(3)
dense(tf.random.uniform((2, 5)))
dense.get_weights()
###Output
_____no_output_____
###Markdown
We can [**directly carry out forward propagation calculations using custom layers.**]
###Code
dense(tf.random.uniform((2, 5)))
###Output
_____no_output_____
###Markdown
We can also (**construct models using custom layers.**) Once we have that we can use it just like the built-in fully-connected layer.
###Code
net = tf.keras.models.Sequential([MyDense(8), MyDense(1)])
net(tf.random.uniform((2, 64)))
###Output
_____no_output_____
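###Markdown
One extra note, not from the original chapter: if you want to serialize models that contain a custom layer via Keras's config-based saving, the usual hook is `get_config`. A minimal sketch:
###Code
class MyDenseSerializable(MyDense):
    def get_config(self):
        # everything needed to reconstruct the layer from its config
        return {'units': self.units}
###Output
_____no_output_____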
###Markdown
Planet Analytics API Tutorial

Getting Started with Planet Analytics API

Overview

1. [Introduction](#1.-Introduction) > Introduction to Planet Analytics capabilities and concepts: Feeds, Subscriptions, Results.
2. [API Mechanics](#2.-API-mechanics) > Overview of available API endpoints, specifications, documentation, and authentication. We'll also introduce the `Requests` library.
3. [Making our first request to the Planet Analytics API](#3.-Making-our-first-request-to-the-Planet-Analytics-API) > Use the `Requests` python library to authenticate, connect, and download data from the Planet Analytics API.
4. [Working with Planet Analytics Data](#4.-Working-with-Planet-Analytics-API-data) > Explore and visualize Planet Analytics data using `GeoPandas` and `GeoViews`.

1. Introduction

The Planet Analytics API leverages computer vision to transform Planet imagery into *analytic feeds* that detect and classify objects, identify geographic features, and monitor change over time across the globe. This tutorial series corresponds with the Analytics Feeds [User Guide](https://developers.planet.com/docs/analytics/), and is intended to help developers access the Planet Analytics API in order to build applications and solutions on top of this unique dataset.

This installment of the tutorial series will focus on connecting to the API and serve as an overview of the concepts of **Feeds** and **Subscriptions**, the two basic building blocks of an "analytics feed".

Feeds

A **Feed** represents an analytic derived from Planet imagery. Each feed essentially represents an analytic capability that has been uniquely configured to optimize performance and quality, and each has unique types of outputs. For example, a Road Detection Feed represents roads detected on monthly Planet Basemaps, and outputs raster "segmentation mask" data. Several types of Feeds are currently available on the Planet Analytics API, and we plan on releasing new feeds in the future.

Subscriptions

Users have *subscriptions to feeds* in a specific Area of Interest (AOI) and Time Interval of Interest (TOI). For example, a **Subscription** could be Road Detections over 12 months in San Francisco, California. "Subscribing" to a Feed is how a user can initiate the process of leveraging an analytic capability over an AOI/TOI and generate analytics datasets called "Results".

We'll be covering how to access the available Feeds and Subscriptions in this tutorial.

Results

When new Planet imagery is published that intersects a Subscription's AOI and TOI, Planet's computer vision models process the imagery and the output is added to a "collection" of **Results** associated with the Subscription. The next tutorial in this series will examine results in more detail.

Visit us at [planet.com](https://www.planet.com/products/analytics/) to learn more about Planet's unique analytics capabilities and offerings.
2. API mechanics

---

API Endpoints

The Planet Analytics API can be accessed at the following base URL: `api.planet.com/analytics`

The main endpoints available correspond to the three types of data exposed on the API: **Feeds**, **Subscriptions**, and **Results**

* `/feeds` - Feeds
* `/subscriptions` - Subscriptions
* `/collections` - Subscription Results "Collections"
* `/collections/{COLLECTION ID}/items` - Subscription Results "Features" (Detections)

In this tutorial, we'll make some example requests to the Planet Analytics API programmatically to demonstrate how the **Feeds** and **Subscriptions** endpoints work and what data they can provide us. The next tutorials in this series will cover **Results** endpoints and working with analytics data in more detail. Before we dive into working with these endpoints, let's go over some more API mechanics and make sure we know how to access the documentation!

Documentation

The documentation extensively lists all the available endpoints and their available `HTTP methods`, as well as any options or `query parameters` that are available for us to control. For a full listing and more information on each endpoint, view the interactive API documentation website: [developers.planet.com/docs/analytics](https://developers.planet.com/docs/analytics/) There you can view in depth listings on all available endpoints, query parameters, and response payloads and statuses.

API Specifications

The Planet Analytics API follows a RESTful (Representational State Transfer) API interface over HTTP:

HTTP Basics

Communicating with the Planet Analytics API is achieved via [Hypertext Transfer Protocol](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) (HTTP) by sending "HTTP requests" from a "client" (your computer or another server) to the Planet Analytics API server, which will issue "responses" back to the requester. There are many ways to make an HTTP request, including command line programs such as [cURL](https://curl.haxx.se/) and libraries written for specific programming languages like [httplib](https://docs.python.org/2/library/httplib.html) in Python. Requests can also be made from your favorite web browser, or other graphical user interfaces (GUI) such as [Insomnia](https://insomnia.rest/). We can even use [QGIS](https://www.qgis.org) to request our Planet Analytics API results.

To facilitate ease-of-use and ensure quality and compliance for a wide variety of applications, the Planet Analytics API implements two standardized specifications:

* OpenAPI
* WFS 3.0

OpenAPI

The Planet Analytics API conforms to the **[OpenAPI Specification](https://github.com/OAI/OpenAPI-Specification)**.

> The OpenAPI Specification, formerly known as the Swagger Specification, is the world's standard for defining RESTful interfaces. The OAS enables developers to design a technology-agnostic API interface that forms the basis of their API development and consumption.
WFS 3.0

The Planet Analytics API's **Results** (`collections` and `items`) endpoints follow the [Open Geospatial Consortium's](http://www.opengeospatial.org) (OGC) [Web Feature Service 3.0](https://github.com/opengeospatial/WFS_FES) (WFS) specification.

> A Web Feature Service (WFS) is a standard API that represents collections of geospatial data.

Conformance information for the Planet Analytics API is available at [https://api.planet.com/analytics/conformance](https://api.planet.com/analytics/conformance)

You can view the Planet Analytics API spec `swagger` definition document at **[https://api.planet.com/analytics/swagger.json](https://api.planet.com/analytics/swagger.json)**

Using the Requests library

In this tutorial, we'll use the **[Requests](http://docs.python-requests.org)** python library to make our `HTTP requests` to the Planet Analytics API.

> Requests is an elegant and simple HTTP library for Python, built for human beings.

*Remember, any libraries or applications that can perform HTTP requests can be used to access the Planet Analytics API. The mechanics will be fairly identical to how we use the Requests library here, so feel free to try your favorite client!*

API Authentication

Setup authentication for Requests

Some of the content on the Planet Analytics API is only available to specific users. We'll need to authenticate each of our requests to the Planet Analytics API in order to access the content that is available to us. We can do this by setting an `Authorization` [`HTTP header`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers) on our requests to one of the three available authentication types:

* `api-key`
* `Basic`
* `Bearer` (JWT - JSON Web Token)

To find more information on each of these authentication approaches, see the [authentication section](https://api.planet.com/analytics/docs#section/Authentication) in the Planet Analytics API documentation.

Basic Authentication

We'll use [Basic Authentication](https://api.planet.com/analytics/docs#section/Authentication/Basic) to authenticate with the Planet Analytics API, via Requests' [`HTTPBasicAuth`](http://docs.python-requests.org/en/master/user/authentication/#basic-authentication) package. Using the Requests library's helpful shorthand argument `auth=('USERNAME','PASSWORD')` makes it very easy to send an authenticated HTTP request! We can use either our Planet `username` and `password`, or simply pass in our `Planet API key` as the username. To find your Planet API key, you can visit your [Planet Account Page](https://www.planet.com/account). If you don't have an account yet, [sign up for one here](https://www.planet.com/login/?mode=signup)!

Let's set our API key in a variable called `API_KEY`.
###Code
# Here, we've already stored our Planet API key as an environment variable on our system
# We use the `os` package to read it into the notebook.
import os
API_KEY = os.environ['PL_API_KEY']
# Alternatively, you can just set your API key directly as a string variable:
# API_KEY = "YOUR_PLANET_API_KEY_HERE"
# Use our API key as the basic authentication username
apiAuth = (API_KEY, '')
# Alternatively, you can use your Planet username and password
# apiAuth = ("[email protected]", 'mypassword')
###Output
_____no_output_____
###Markdown
After we've got our API credentials set up, we're ready to continue to the next section and make our first request!

3. Making our first request to the Planet Analytics API

---

Let's start to explore the **Feeds** and **Subscriptions** data available to us by interacting directly with the API and making our first request!

Configure the base URL

Once we have our authentication variable set up, we can create a variable that holds the "base URL" of the Planet Analytics API. This is the root URL onto which we add endpoints.
###Code
# Planet Analytics API base url
PAA_BASE_URL = "https://api.planet.com/analytics/"
###Output
_____no_output_____
###Markdown
First request: Get a list of available Feeds

Let's make a request to get information on all our available `Feeds`. The request should go to the following address: https://api.planet.com/analytics/feeds

Setup the Request Endpoint
###Code
# Define our endpoint to point to "feeds"
feeds_endpoint = "feeds"
# Construct the URL for the HTTP request
#(Planet Analytics API base URL + desired endpoint)
request_url = PAA_BASE_URL + feeds_endpoint
###Output
_____no_output_____
###Markdown
Make the Request

Since we're making a `GET` request, we'll use Requests' `.get` method. Now, let's create our request by passing our request URL and auth variable. Running the next cell should make a call out to the Planet Analytics API.
###Code
# Import the requests library
import requests
# Make the GET request
response = requests.get(request_url, auth=apiAuth)
print(response)
###Output
_____no_output_____
###Markdown
If our request call above was **successful** we should get back a response with a [`200 OK`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200) `HTTP status code`! If we get any other type of response, we may be doing something wrong and be running into an error. There should be a message and an `HTTP status code` in the response with more information to help us debug. All expected response codes and their messages are listed in the `Response Schema` section for each endpoint in the Planet Analytics API documentation.

Most importantly, our successful response also delivers a payload in the response `body` that should contain **[`JSON`](https://www.json.org/)** data.

*Congratulations!* You've just made your first request to the Planet Analytics API!

Next, let's take a look at the data we received.

Reading Response Data

We need to decode our response `JSON` to inspect the payload. Once we do, we should see a `data` property in the payload that contains a list or array of **Feed** objects:
###Code
# Decode the response JSON body to a python dict
response_json = response.json()
print(response_json)
###Output
_____no_output_____
###Markdown
If the response's 'data' property is an empty array, this means you don't have any Analytics subscriptions in your account. Please contact your customer support manager or contact sales at [https://www.planet.com/contact-sales/](https://www.planet.com/contact-sales/) to resolve this problem.

Further on in this tutorial we'll cover the specifics of this response, but for now let's focus on the format of our responses.

JSON Responses

If you're used to working with `JSON` then you should be able to understand the above output. `JSON` contains `key-value` pairs of data. All responses from the Planet Analytics API should return `JSON` data, either single `JSON` objects or sometimes nested lists of objects along with other properties. The raw, unformatted output above is a bit hard to read for most humans though...

Using the `json` python package, we can "beautify" the response data to make it easier to read:
###Code
import json
# "Beautify" JSON with json.dumps
beautified_json = json.dumps(response_json, sort_keys=True, indent=4)
print(beautified_json)
###Output
_____no_output_____
###Markdown
Alternatively, we can also use python's [`pprint`](https://docs.python.org/3.7/library/pprint.html) module to "beautify" our dict
###Code
# Use the pprint module
import pprint
pp = pprint.PrettyPrinter(indent=4)
# Beautify our response_json dict with pp
pp.pprint(response_json)
###Output
_____no_output_____
###Markdown
Finally, let's export the "beautified" data into a `JSON` file that we'll call `feeds.json`:
###Code
# Write a new .json file to disk
with open('feeds.json', 'w') as file:
file.write(beautified_json)
# Alternatively, you could write the data directly from the response without beautifying it first
# with open('subscriptions.json', 'w') as file:
# file.write(response.text)
###Output
_____no_output_____
###Markdown
The code in the cell above should save a new file with our feeds list.

Interpreting Response Data

Our response from the `feeds` endpoint request contains a property called "`data`", which in this case contains a list of individual **Feed** objects available on the API. The `subscriptions` endpoint also contains a `data` property with analogous objects for **Subscriptions**. The `data` property in the top level response object contains most of the data we'd expect to be looking for, while the other properties like `links` are meta data.

Links

We also see the OpenAPI style `links` property which is useful to get direct links to the request we made. We'll see similar `links` properties for both **Subscriptions** and **Results** that will give us an easy way to follow pagination and get related assets as well! Since the **Results** section of the API follows the `WFS3` spec, the responses will look a bit different. We'll go in depth on Results in the second tutorial in this series.

Query Parameters

Many endpoints also expose `query parameters` like `before` and `limit`, that can help us paginate through responses or look for specific types of responses. We'll cover some of these later on, but you can always refer to the documentation to see which `query parameters` are available for each endpoint.

Putting it all together

We've broken the steps down above, but making this request is as easy as this simple one-liner in python code:

```requests.get("https://api.planet.com/analytics/feeds", auth=(API_KEY,'')).json()```

4. Working with Planet Analytics API data

---

Let's take a closer look at **Feeds** and **Subscriptions** by looking at a single Feed and a list of Subscriptions for a Feed.

Feed Info

The response from the `/feeds/` endpoint gives us a list of available Feeds under the `data` property. We'll discuss what the data for each of these Feeds means next, but first let's use the `id` property to request an individual Feed directly. The `id` is a [Universally Unique Identifier](https://en.wikipedia.org/wiki/Universally_unique_identifier) (UUID) string, and as the name suggests, is unique to each element.

Here's how we can construct the URL to a single Feed using its `id`:
###Code
# Get a feed ID the user has access to
feed_id = requests.get("https://api.planet.com/analytics/feeds", auth=(API_KEY,'')).json()['data'][0]['id']
print('feed_id: {}'.format(feed_id))
# URL to request a single feed
single_feed_url = PAA_BASE_URL + feeds_endpoint + "/" + feed_id
print('single_feed_url: {}'.format(single_feed_url))
###Output
_____no_output_____
###Markdown
Now let's make a request using the single Feed url:
###Code
# Get the single Feed from our GET request response
single_feed = requests.get(single_feed_url, auth=(API_KEY,'')).json()
print(json.dumps(single_feed, indent=1))
###Output
_____no_output_____
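###Markdown
Before moving on, here's what the pagination `query parameters` mentioned earlier could look like in practice. This is a hedged sketch: it assumes `limit` caps the number of returned items and `before` takes an item `id` to page backwards from; check the API documentation for the exact semantics.
```
# sketch: request at most 10 feeds per page
page = requests.get(PAA_BASE_URL + feeds_endpoint, params={"limit": 10}, auth=(API_KEY, '')).json()
# sketch: fetch the next page by passing the last feed id as `before`
# next_page = requests.get(PAA_BASE_URL + feeds_endpoint, params={"limit": 10, "before": page['data'][-1]['id']}, auth=(API_KEY, '')).json()
```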
###Markdown
We can see that the **Feed** includes information like the `title`, `description`, `id`, and dates indicating when the Feed was `created` and `updated`. We also see a `target` property containing `type`, which will let us know what kind of **Results** the Feed generates (collections of features vs raster mosaics).
Under the `source` property, we see the configuration for the source imagery that the Feed operates on. The `query` property under `source` `config` should be familiar if you've worked with the [Planet Data API](https://developers.planet.com/docs/api/), and we can see which Planet `item type` the Feed is configured to use (ex. `PSScene3Band`). Finally the `links` property is also available as we've seen before.
In the next section, let's take a look at a **Subscription** associated with a particular **Feed**.
Working with Subscriptions
We can get a list of available **Subscriptions** in the same way we did for Feeds, by making a request to the `/subscriptions` endpoint. We can also get a single Subscription using its `id` by appending it to the Subscriptions endpoint: `/subscriptions/{SUBSCRIPTION_ID}`.
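For example, a minimal sketch of fetching one Subscription directly (the UUID below is a placeholder for illustration, not a real id):
```
# hypothetical subscription id, for illustration only
subscription_id = "00000000-0000-0000-0000-000000000000"
single_subscription_url = PAA_BASE_URL + "subscriptions/" + subscription_id
# single_subscription = requests.get(single_subscription_url, auth=(API_KEY, '')).json()
```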
###Code
# Define the subscriptions endpoint
subscriptions_endpoint = 'subscriptions'
# Construct a URL to list all available subscriptions
feed_subscriptions_url = PAA_BASE_URL + subscriptions_endpoint
print(feed_subscriptions_url)
###Output
_____no_output_____
###Markdown
The `/subscriptions` endpoint additionally exposes a `query parameter` that lets us get a list of available Subscriptions that are associated with a particular Feed, using the Feed's `id`. The parameter is called `feedID` and takes a valid Feed `id` UUID.
Let's make a request that lists all Subscriptions for the Feed we just looked at:
###Code
# Set query parameters for the request
# Use the feedID query parameter
feed_subscriptions_params = {"feedID": feed_id}
# Make the request to the api
feed_subscriptions_response = requests.get(feed_subscriptions_url, params=feed_subscriptions_params, auth=(API_KEY,'')).json()
# Get the list of subscriptions from the 'data' property of the response
subscriptions = feed_subscriptions_response['data']
# Print the number of subscriptions found for the given feed
print("{} subscriptions found for Feed with id:\n{}\n".format(len(subscriptions), feed_id))
# Print the subscriptions list
print(json.dumps(subscriptions, indent=1))
###Output
_____no_output_____
###Markdown
We should now have a list of **Subscriptions** associated with the Feed from the previous section!
Subscription Info
Once again, we can see that each Subscription object in the list contains properties for `id`, `title`, `description`, `links`, and timestamps for `created` and `updated`. For Subscriptions, we also see a `feedID` property which contains the `uuid` of the associated **Feed**.
Time of Interest (TOI)
There are also two additional timestamps available in our Subscription data, under the `startTime` and `endTime` properties. These two timestamps indicate the "Time of Interest" (TOI) for the Subscription, meaning the Subscription will process Planet imagery that was collected or published (according to the Feed configuration) over that time span. No `endTime` property means that the Subscription will continue to run indefinitely.
Subscription Geometry (AOI)
The Subscription's `geometry` property is a [`GeoJSON geometry`](https://tools.ietf.org/html/rfc7946#section-3.1) object, and indicates the shape and location of the "Area of Interest" (AOI) where the Feed is processing Planet imagery and making new detections.
Valid geometry `types` for subscriptions are:
* [Point](https://tools.ietf.org/html/rfc7946#section-3.1.2)
* [LineString](https://tools.ietf.org/html/rfc7946#section-3.1.4)
* [Polygon](https://tools.ietf.org/html/rfc7946#section-3.1.6)
* [MultiPoint](https://tools.ietf.org/html/rfc7946#section-3.1.3)
* [MultiLineString](https://tools.ietf.org/html/rfc7946#section-3.1.5)
* [MultiPolygon](https://tools.ietf.org/html/rfc7946#section-3.1.7)
Here's an example of what a Subscription's geometry looks like:
###Code
subscriptions[0]['geometry']
###Output
_____no_output_____
###Markdown
We can see that the `geometry` object is made up of **longitude** and **latitude** coordinate pairs, and has a `GeoJSON` type property, in this case "Polygon". You may already have a sense of what part of the Earth our Subscription AOI covers from these coordinates, but let's see how we can use Python to visualize the AOI and also explore our data in the next section.
Exploring and visualizing Planet Analytics Data
The python data science eco-system is teeming with useful libraries, packages, and tools to help us explore our data. Let's use [Pandas](https://pandas.pydata.org/) to take a closer look.
Pandas
In order to make our Planet Analytics API data a little easier to work with in our Jupyter Notebook, we can take our Subscriptions list from the response data and convert it to a `Pandas DataFrame`.
Python Data Analysis Library (pandas)
> pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
It's pretty easy to create a new `DataFrame` from our `JSON` data. We just need to pass in our Subscriptions array/list:
###Code
import pandas as pd
# Create a Pandas DataFrame with our subscriptions list data
df = pd.DataFrame(subscriptions)
# Same as:
# pd.DataFrame.from_dict(subscriptions)
# Show the first 5 rows of our DataFrame in a table
df.head()
###Output
_____no_output_____
###Markdown
Great! We now have a neat table output that's easy to read and comes with some built-in data manipulation functionality. This should be familiar to anyone who's done some data science with Python and Jupyter Notebooks. We can see all the properties we discussed earlier are now our DataFrame columns. Let's browse the titles and descriptions for our Subscriptions:
###Code
df[['title', 'description']].head(20)
###Output
_____no_output_____
###Markdown
GeoPandas
Since one of the most important aspects of our Subscription data is the location/AOI or `geometry` which it covers, we can also use **[GeoPandas](http://geopandas.org/)** to work with our Subscriptions.
> GeoPandas extends the datatypes used by `pandas` to allow spatial operations on geometric types. Geometric operations are performed by [`shapely`](https://github.com/Toblerity/Shapely). GeoPandas further depends on [`Fiona`](https://fiona.readthedocs.io) for file access and `descartes` and `matplotlib` for plotting.
In order to take advantage of GeoPandas' geographic functionality, we'll need to convert each subscription geometry from a `GeoJSON geometry` object to a [`shapely`](https://github.com/Toblerity/Shapely) geometry object. GeoPandas will be able to understand the geometry natively from `shapely` and create a [`GeoSeries`](http://geopandas.org/data_structures.html#geoseries), which it uses to unlock the additional geospatial capabilities it brings to Pandas.
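To see what `shape` does on its own, here's a minimal sketch with a made-up GeoJSON polygon (the coordinates are placeholders, not a real AOI):
```
from shapely.geometry import shape
geojson_geom = {"type": "Polygon", "coordinates": [[[0, 0], [1, 0], [1, 1], [0, 0]]]}
poly = shape(geojson_geom)
print(poly.geom_type)  # 'Polygon'
```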
###Code
import geopandas as gpd
from shapely.geometry import shape
# Create a new GeoPandas DataFrame from our subscriptions data
gdf = gpd.GeoDataFrame(subscriptions)
# Transform geometry column values into shapely objects
gdf.set_geometry(gdf['geometry'].apply(shape), inplace=True)
# Show first 5 subscriptions
gdf.head()
###Output
_____no_output_____
###Markdown
Our geometry data type has now changed to a `GeoSeries`:
###Code
# Get the type for the geometry column
type(gdf['geometry'])
###Output
_____no_output_____
###Markdown
Let's take a look at a single subscription in a bit more detail.
###Code
# Select the first subscription row as a DataFrame
subscription_1 = gdf.iloc[[0]]
# Print out the GeoDataFrame for the first subscription row
subscription_1
###Output
_____no_output_____
###Markdown
Geospatial applications with GeoPandas
GeoPandas allows us to manipulate and do calculations on our **Subscription** `geometry`.
Example: Getting the Subscription AOI's Area
For starters, let's use GeoPandas to get the **area** of our geometry (in km²)! Before we do, we need to understand how GeoPandas handles our data in a bit more detail.
Geometry CRS
The coordinate pairs that make up our original `GeoJSON` geometries are all expressed in [Longitude and Latitude](https://en.wikipedia.org/wiki/Geographic_coordinate_system#Latitude_and_longitude). But the geometric shapes we created with `shapely` in our GeoDataFrame's `GeoSeries` are currently just a collection of coordinates in an arbitrary space. Before we can do geospatial queries on our `DataFrame`, we need to set a [coordinate reference system](https://en.wikipedia.org/wiki/Spatial_reference_system) (CRS) for the `GeoSeries` to ensure our calculations are done in the correct units, for example `meters`.
*For more about how GeoPandas handles projections, visit the GeoPandas documentation [here](http://geopandas.org/projections.html#managing-projections).*
Let's start by checking for a CRS in our row's `GeoSeries`:
###Code
# Check our initial CRS
initial_crs = subscription_1.crs
print("Initial CRS: {}".format(initial_crs))
# Should be 'None' at first
###Output
_____no_output_____
###Markdown
We didn't specify a CRS when we created our `GeoSeries`, so `None` is expected here. So what CRS should we use then? According to the [GeoJSON spec](https://tools.ietf.org/html/rfc7946#section-4), the CRS used by `GeoJSON` is the [World Geodetic System 1984](https://en.wikipedia.org/wiki/World_Geodetic_System) (`WGS84`).
> The coordinate reference system for all GeoJSON coordinates is a geographic coordinate reference system, using the World Geodetic System 1984 (`WGS84`) datum, with longitude and latitude units of decimal degrees.
This means our original geometry data uses `WGS84`. The EPSG code for `WGS84` is [`EPSG:4326`](http://spatialreference.org/ref/epsg/wgs-84/). **This will be true for any geographic (GeoJSON) data available via the Planet Analytics API.**
Let's get the CRS definition for `EPSG:4326` using a helper function from the [Fiona](https://fiona.readthedocs.io) library (represented as a mapping of [proj.4](https://proj4.org/) parameters):
###Code
from fiona.crs import from_epsg
# Get a projection definition using Fiona's `from_epsg` to access the proj4 definition
proj_def = from_epsg('4326')['init']
print("Projection Definition: {}".format(proj_def))
###Output
_____no_output_____
###Markdown
Now that we've got the definition, let's set the initial CRS on our `GeoSeries` to `WGS84` using the `proj_def` from the previous cell:
###Code
# Set the GeoSeries CRS
subscription_1.crs = proj_def
# Check our NEW CRS
print("New CRS: {}".format(subscription_1.crs))
###Output
_____no_output_____
###Markdown
Perfect! Our `GeoSeries` now uses the correct CRS!
Now that we've understood what's going on under the hood, there's actually a way we could have set the CRS when we initially transformed our `GeoJSON` geometries into `shapely` objects to create the GeoSeries, by using the `crs` argument and passing in our projection definition `proj_def`:
```gdf.set_geometry(gdf['geometry'].apply(shape), inplace=True, crs=proj_def)```
Either way, it's important that an initial CRS is set on our `GeoSeries`* so that we can re-project the data if we need to, which we'll see is the case when calculating the area. These same concepts apply whenever we work with geographic data in other projections.
\* *So far in this notebook, we've only set the CRS on the first subscription row. The alternative method described in this cell would set the CRS for all rows' geometries.*
Projected Coordinate Systems
Now that we have an initial CRS set to `WGS84`, we know that the current units of our `GeoSeries` are in (decimal) **degrees**. Since we're interested in getting the area in **square kilometers**, first we'll need to reproject our geometry to a Cartesian projected coordinate system like [EPSG:3857](https://epsg.io/3857), whose units are expressed in **meters**:
###Code
# Re-project row geometry to EPSG:3857
projected = subscription_1.to_crs(epsg=3857)
# Display the CRS of our re-projected row geometry
projected.crs
###Output
_____no_output_____
###Markdown
Calculate the area of the subscription geometry
Finally we can use the `.area` [`GeoSeries` attribute](http://geopandas.org/data_structures.html#attributes) on the re-projected `GeoSeries` to get the area of our subscription geometry. Since the projected CRS is in meters, `.area` returns square meters, and 1 km² = 1,000,000 m².
###Code
# Get the area (will be in square meters, since the CRS units are meters)
area_m2 = projected.area.values[0]
# Convert the area to square kilometers (1 km² = 1,000,000 m²) and round to two decimal places
area_km2 = round(area_m2 / 1000000, 2)
# Print the subscription area in km2
print("\n Subscription Area: {} km\xB2".format(area_km2))
###Output
_____no_output_____
###Markdown
Visualizing the subscription geometry
Let's visually inspect our first subscription's geometry. We can use [GeoPandas' built-in `.plot()` method](http://geopandas.org/mapping.html) to render a [matplotlib](https://matplotlib.org/) chart:
###Code
# Plot the subscription geometry
subscription_1.plot()
###Output
_____no_output_____
###Markdown
The above gives us the shape and location of the subscription geometry, but doesn't provide much context...
Interactive Visualizations with GeoViews
Let's use **[GeoViews](https://geoviews.org/)** to create an interactive web map with the subscription geometry so we can see a bit more context around the geometry. GeoViews is built on [HoloViews](http://holoviews.org/) and uses [Cartopy](http://scitools.org.uk/cartopy), [matplotlib](http://matplotlib.org/) and [Bokeh](http://bokeh.pydata.org/) for visualizations. It is released as part of the [PyViz](http://pyviz.org/) suite.
> GeoViews is a Python library that makes it easy to explore and visualize geographical, meteorological, and oceanographic datasets, such as those used in weather, climate, and remote sensing research.
First, let's import GeoViews and set the rendering backends:
###Code
# Import GeoViews
import geoviews as gv
# Set GeoViews' rendering backends
gv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Although we already know that the subscription geometry's shape is a polygon from both the `GeoJSON geometry` `type` property and our previous plot, let's see how we can find the geometry `type` using GeoPandas. This can help us understand how to use GeoViews to plot our geometry.
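As a side note, the same information is available from the underlying `shapely` object directly via its `geom_type` attribute:
```
subscription_1['geometry'].iloc[0].geom_type  # 'Polygon'
```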
###Code
# Get the geometry type from our subscription `GeoDataFrame`
geo_type = subscription_1.geometry.type.values[0]
print("Geometry Type: {}".format(geo_type))
###Output
_____no_output_____
###Markdown
Again, the geometry `type` is a `Polygon`, so we can use GeoViews' `gv.Polygons` to easily render our GeoDataFrame as an interactive plot using `bokeh` as the rendering backend:
###Code
# Use GeoViews to render a polygon
polygon = gv.Polygons(subscription_1).opts(padding=0.1)
polygon
###Output
_____no_output_____
###Markdown
We can also use GeoViews' [`Shape`](http://geo.holoviews.org/Geometries.html) object to automatically render our geometry without explicitly using the `gv.Polygons` object, by passing in the `shapely` object which underlies our `GeoSeries`.
> The `gv.Shape` object wraps around any shapely geometry
###Code
# Create the label from the subscription title
label = "Subscription Geometry for '{}'".format(subscription_1['title'].values[0])
# Create a GeoViews shape object for our subscription plot
subscription_plot = gv.Shape(subscription_1['geometry'][0], label=label).opts(style=dict(alpha=0.3))
###Output
_____no_output_____
###Markdown
Using the `*` operator, we can combine (overlay) various GeoViews (and Holoviews) plots.
###Code
# List built in GeoViews tile sources
# help(gv.tile_sources)
# Create a basemap
basemap = gv.tile_sources.CartoLight
# Create a "web map" by combining a tile layer plot with the subscription plot into a gv.Overlay object
webmap = basemap * subscription_plot
# Render the webmap using some options
webmap.opts(width=800, height=400, padding=0.2)
###Output
_____no_output_____ |
getting-started/geoDB/11_EDC-geodb_manage-datasets.ipynb | ###Markdown
Euro Data Cube - geoDB: Getting Started (11)
The Euro Data Cube Jupyter Lab environment has all dependencies preinstalled and the necessary credentials prepared as environment variables. To run this notebook outside of this environment please follow the setup outlined [here](./../99_EDC_Setup.ipynb).
Manage Datasets
This notebook shows how to load, delete and update geoDB collections.
###Code
from xcube_geodb.core.geodb import GeoDBClient
geodb = GeoDBClient()
geodb.whoami
ds = geodb.get_collections()
ds
###Output
_____no_output_____
###Markdown
Creating collections
Once the connection has been established you will be able to create a table for datasets. The table will contain standard properties (fields). The list of properties can be extended by the user.
###Code
# Have a look at fiona feature schema
collections = {
"land_use":
{
"crs": 3794,
"properties":
{
"RABA_PID": "float",
"RABA_ID": "float",
"D_OD": "date"
}
}
}
# return obj implementing __repr_html__
geodb.create_collections(collections)
ds = geodb.get_collections()
ds
###Output
_____no_output_____
###Markdown
Loading data into a dataset
Once the table has been created, you can load data into the dataset. The example below loads a shapefile. The attributes of the shapefile correspond to the dataset's properties.
###Code
import geopandas
gdf = geopandas.read_file('data/sample/land_use.shp')
gdf
geodb.insert_into_collection('land_use', gdf)
###Output
_____no_output_____
###Markdown
`geodb.get_collection('land_use', query="raba_id=eq.7000")`
Delete from a Collection
###Code
geodb.delete_from_collection('land_use', query="raba_id=eq.7000")
geodb.get_collection('land_use', query="raba_id=eq.7000")
###Output
_____no_output_____
###Markdown
Updating a Collection
###Code
geodb.get_collection('land_use', query="raba_id=eq.1300")
geodb.update_collection('land_use', query="raba_id=eq.1300", values={'d_od': '2000-01-01'})
geodb.get_collection('land_use', query="raba_id=eq.1300")
###Output
_____no_output_____
###Markdown
Managing Properties of a Collection
###Code
geodb.get_collections()
geodb.get_properties('land_use')
geodb.add_property('land_use', "test_prop", 'integer')
geodb.get_properties('land_use')
geodb.drop_property('land_use', 'test_prop')
geodb.get_properties('land_use')
geodb.add_properties('land_use', properties={'test1': 'integer', 'test2': 'date'})
geodb.get_properties('land_use')
geodb.drop_properties('land_use', properties=['test1', 'test2'])
geodb.get_properties('land_use')
###Output
_____no_output_____ |
classification/DeViSE_keras.ipynb | ###Markdown
DeViSE in Keras
Keras implementation of the neural network described in Andrea Frome et al., 2013, [DeViSE: A Deep Visual-Semantic Embedding Model](https://papers.nips.cc/paper/5204-devise-a-deep-visual-semantic-embedding-model). For more on word2vec, check this [link](https://skymind.ai/wiki/word2vec).
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
!pip install nmslib
!pip install git+https://github.com/facebookresearch/fastText.git
import warnings
warnings.filterwarnings("ignore")
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.utils import Sequence
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model, load_model
from tensorflow.keras import preprocessing
import pandas as pd
import numpy as np
from tqdm import tqdm, tqdm_notebook
import io
import pathlib
import collections
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
from PIL import Image
from random import randint
import pickle
import fastText as ft
from google.colab import drive
drive.mount('/content/gdrive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /content/gdrive
###Markdown
Data
###Code
path = pathlib.Path('.')
imagenet_path = path/'tiny-imagenet-200'
word2vec_path = path
###Output
_____no_output_____
###Markdown
ImageNet
Download the [tiny](https://tiny-imagenet.herokuapp.com/) ImageNet dataset (236 MB), or the full dataset (155 GB), which can be found [here](https://www.kaggle.com/c/imagenet-object-localization-challenge/data).
###Code
!curl -O http://cs231n.stanford.edu/tiny-imagenet-200.zip
!unzip -q tiny-imagenet-200.zip
!ls tiny-imagenet-200/
!ls tiny-imagenet-200/train/n01443537/images | head
!head tiny-imagenet-200/words.txt
!head tiny-imagenet-200/wnids.txt
!wc -l tiny-imagenet-200/words.txt
!wc -l tiny-imagenet-200/wnids.txt
###Output
200 tiny-imagenet-200/wnids.txt
###Markdown
Word2Vec
Download word2vec
Download a decent [word2vec](https://skymind.ai/wiki/word2vec) representation like Facebook's [FastText](https://fasttext.cc/docs/en/english-vectors.html).
###Code
!curl -O https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki-news-300d-1M.vec.zip
!unzip -q wiki-news-300d-1M.vec.zip
!curl -O https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.zip
!unzip wiki.en.zip
!head wiki.en.vec
###Output
_____no_output_____
###Markdown
Load the word vectors
###Code
# load the word2vec representations into numpy arrays
def load_vectors(fname):
input = io.open(fname, 'r', encoding='utf-8', newline='\n', errors='ignore')
# read first line which contains number of tokens and vector dimension
num, dim = map(int, input.readline().split())
mean = np.zeros((dim), dtype=np.float32)
# read line and convert tokens word2vec representation into numpy arrays
data = {}
for line in tqdm(input):
tokens = line.rstrip().split(' ')
data[tokens[0]] = [np.float32(i) for i in tokens[1:]]
mean = mean + data[tokens[0]]
    # set the vector representation for unknown words with the mean vector
mean = mean / num
word2vec = data # word2vec = collections.defaultdict(lambda:mean, data)
return word2vec, mean
word2vec, word2vec_mean = load_vectors(word2vec_path/'wiki.en.vec') # 'wiki-news-300d-1M.vec'
en_vecs = ft.load_model('wiki.en.bin')
def get_vecs(lang, ft_vecs):
vecd = {w:ft_vecs.get_word_vector(w) for w in ft_vecs.get_words()}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
word2vec = get_vecs('en', en_vecs)
word2vec = pickle.load(open(path/'wiki.en.pkl','rb'))
###Output
_____no_output_____
###Markdown
Play with word2vec
Get the vectors for a few words and check the correlation coefficients between them.
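Correlation is one way to compare embeddings; cosine similarity, which the DeViSE loss below is built on, is another. A minimal numpy sketch (the `*_vec` variables are defined in the next cell):
```
# cosine similarity between two word vectors
def cosine_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
# we'd expect cosine_sim(boat_vec, plane_vec) > cosine_sim(boat_vec, slug_vec)
```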
###Code
boat_vec = en_vecs.get_word_vector('boat') # word2vec['boat']
plane_vec = en_vecs.get_word_vector('plane') # word2vec['plane']
orange_vec = en_vecs.get_word_vector('orange') # word2vec['orange']
slug_vec = en_vecs.get_word_vector('slug') # word2vec['slug']
np.corrcoef(boat_vec, plane_vec)
np.corrcoef(boat_vec, orange_vec)
np.corrcoef(boat_vec, slug_vec)
###Output
_____no_output_____
###Markdown
ImageNet / Word2Vec mapping
Map ImageNet classes (with their [synset](https://wordnet.princeton.edu/) representation) to their word2vec representation.
###Code
def load_wordnet(fname, delimeter=' ', synset_idx=0, word_idx=1):
input = io.open(fname, 'r', encoding='utf-8', newline='\n', errors='ignore')
syn2word, word2syn = {}, {}
for line in tqdm(input):
tokens = line.rstrip().split(delimeter)
synset = tokens[synset_idx]
word = tokens[word_idx].lower() # clean words like Arabian_camel
# synset to word
syn2word[synset] = word
# word to synset
word2syn[word] = synset
return syn2word, word2syn
syn2word, word2syn = load_wordnet(imagenet_path/'words.txt', '\t') # 1k
###Output
82115it [00:00, 583919.98it/s]
###Markdown
read the synset ids for the training images
###Code
classes = [id.rstrip() for id in open(imagenet_path/'wnids.txt')]
classes[2], syn2word[classes[2]], classes[100], syn2word[classes[100]]
#np.corrcoef(word2vec[syn2word[classes[2]]], word2vec[syn2word[classes[100]]])
np.corrcoef(en_vecs.get_word_vector(syn2word[classes[2]]), en_vecs.get_word_vector(syn2word[classes[100]]))
###Output
_____no_output_____
###Markdown
Check that we have a proper word2vec for each class
###Code
columns = ['id', 'word', 'vec']
data = {'id': [], 'word': [], 'vec': []}
for id in classes:
data['id'].append(id)
data['word'].append(syn2word[id])
vec = en_vecs.get_word_vector(syn2word[id]) # vec = word2vec[syn2word[id]]
data['vec'].append(vec[:5])
df = pd.DataFrame(data, columns=columns); df.head()
###Output
_____no_output_____
###Markdown
A lot of the class labels in this tiny ImageNet dataset cannot be found in the FastText word2vecs. As an alternative, we will use the labels from the full ImageNet dataset.
###Code
!curl -O https://gist.githubusercontent.com/aaronpolhamus/964a4411c0906315deb9f4a3723aac57/raw/aa66dd9dbf6b56649fa3fab83659b2acbf3cbfd1/map_clsloc.txt
syn2word, word2syn = load_wordnet(path/'map_clsloc.txt', delimeter=' ', synset_idx=0, word_idx=2)
columns = ['id', 'word', 'vec']
data = {'id': [], 'word': [], 'vec': []}
for id in classes:
data['id'].append(id)
data['word'].append(syn2word[id])
vec = en_vecs.get_word_vector(syn2word[id]) # vec = word2vec[syn2word[id]]
data['vec'].append(vec[:5])
df = pd.DataFrame(data, columns=columns); df.head()
###Output
_____no_output_____
###Markdown
Same problem: a lot of the classes in the dataset still do not exist in word2vec. We will try a new approach for composite words: take the average of the word2vec vectors of each word in the composite word, e.g. w2v('arabian_camel') = (w2v('arabian') + w2v('camel')) / 2.
###Code
EN_WORDS = en_vecs.get_words()
def get_vec_by_word(word, vec_size=300):
# if word is in w2v then return it's vector
#if word in word2vec: return word2vec[word]
if word in EN_WORDS:
return en_vecs.get_word_vector(word)
# otherwise take the average of vectors of each word in this comopsite word
vec = np.zeros((vec_size), dtype=np.float32)
words = word.split('_')
    # fall back to a random vector if the composite word has only a single (unknown) token
if len(words)==1:
#return word2vec_mean
return np.random.random((vec_size))
for w in words:
vec = vec + get_vec_by_word(w)
return vec / len(words)
columns = ['id', 'word', 'vec']
data = {'id': [], 'word': [], 'vec': []}
for id in classes:
data['id'].append(id)
data['word'].append(syn2word[id])
vec = en_vecs.get_word_vector(syn2word[id]) # vec = word2vec[syn2word[id]]
data['vec'].append(vec[:5])
df = pd.DataFrame(data, columns=columns); df.head()
###Output
_____no_output_____
###Markdown
Model
ResNet-50 based model
###Code
batch_size = 64
classes_size = 300
###Output
_____no_output_____
###Markdown
Data
###Code
class ImageGenerator(Sequence):
"""Generator for a sequence of Images"""
def __init__(self, path, fnames, labels, classes_size, batch_size, image_size=(224, 224), shuffle=True):
self.path = path
self.image_size, self.batch_size = image_size, batch_size
self.items, self.items_size = fnames, len(fnames)
self.labels = labels
self.classes_size = classes_size
self.indexes = np.arange(self.items_size)
self.shuffle= shuffle
self.on_epoch_end()
def load_urls_(self, indexes):
"""Load the urls of the images into a tensor"""
# init target arrays
images = np.zeros((self.batch_size, self.image_size[0], self.image_size[1], 3), dtype=np.float32)
labels = np.zeros((self.batch_size, self.classes_size), dtype=np.float32)
# Find list of urls in this batch
urls = [self.path/self.items[k] for k in indexes]
lbls = [self.labels[k] for k in indexes]
for index, img_path in enumerate(urls):
# read image from url
img = preprocessing.image.load_img(img_path, target_size=self.image_size)
img_data = preprocessing.image.img_to_array(img)
# read the proper label
lbl_data = lbls[index]
# append data
images[index, :] = img_data
labels[index, :] = lbl_data
return images, labels
def on_epoch_end(self):
"""Rearrange the indexes after each epoch"""
self.indexes = np.arange(self.items_size)
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __len__(self):
"""Number of batches per epoch"""
return int(np.floor(self.items_size / self.batch_size))
def __getitem__(self, index):
"""Generate one batch of data"""
# Generate indexes of the batch
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
# Generate data for the batch
X, y = self.load_urls_(indexes)
return X, y
###Output
_____no_output_____
###Markdown
Helper functions for dealing with image datasets; the `DataBunch` class further below gathers training/validation/test data into one place.
###Code
# helpers for datasets where the label is encoded in the filename (e.g. 'arabian_camel_12.JPEG')
def get_label_name_from_fname(fname):
    """Get the name of the image label from the filename"""
    return '_'.join(fname.split('_')[:-1]).lower()

def get_label_index_from_fname(fname, classes):
    """Get the index of the label from the filename"""
    lbl_name = get_label_name_from_fname(fname)
    return classes.index(lbl_name)
# get the word from a given image filename
def get_label_from_fname(fname):
fname = fname.split('/')
if 'train' in fname:
index = fname.index('train') + 1
synset = fname[index]
word = syn2word[synset]
return word
print('cannot find word for', fname)
return None
# get the word2vec representation from a given image filename
def get_word2vec_from_fname(fname):
fname = fname.split('/')
if 'train' in fname:
index = fname.index('train') + 1
synset = fname[index]
word = syn2word[synset]
vec = get_vec_by_word(word)
return vec
print('cannot find word2vec representation for', fname)
return None
!head tiny-imagenet-200/val/val_annotations.txt
###Output
val_0.JPEG n03444034 0 32 44 62
val_1.JPEG n04067472 52 55 57 59
val_2.JPEG n04070727 4 0 60 55
val_3.JPEG n02808440 3 3 63 63
val_4.JPEG n02808440 9 27 63 48
val_5.JPEG n04399382 7 0 59 63
val_6.JPEG n04179913 0 0 63 56
val_7.JPEG n02823428 5 0 57 63
val_8.JPEG n04146614 0 31 60 60
val_9.JPEG n02226429 0 3 63 57
###Markdown
data bunch for gathering train/validation/test sets in one place
###Code
def ceildiv(a, b):
return -(-a // b)
def plots_from_files(imspaths, figsize=(10,5), rows=1, titles=None, maintitle=None):
"""Plot the images in a grid"""
f = plt.figure(figsize=figsize)
if maintitle is not None: plt.suptitle(maintitle, fontsize=10)
for i in range(len(imspaths)):
sp = f.add_subplot(rows, ceildiv(len(imspaths), rows), i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
img = plt.imread(imspaths[i])
plt.imshow(img)
class DataBunch():
"""An image data bunch"""
def __init__(self, path, classes_size, train_gen, valid_gen, test_gen=None):
self.path = path
self.cls_size = classes_size
self.train_gen = train_gen
self.valid_gen = valid_gen
self.test_gen = test_gen
def show_bunch(self, get_title, rows=3, figsize=(7, 6), **kwargs):
"""Show a bunch of images from the dataset"""
imspaths = np.random.choice(self.train_gen.items, 9)
# '_'.join(p.split('_')[:-1]).lower()
titles = [get_title(p) for p in imspaths]
        imspaths = [(self.path/p).as_posix() for p in imspaths]
plots_from_files(imspaths, figsize, rows, titles)
@property
def c(self):
return self.classes
@property
def classes_size(self):
return self.cls_size
###Output
_____no_output_____
###Markdown
training dataset
Load data
###Code
pattern = '*/images/*.JPEG'
# training set
train_fnames = [str(p) for p in tqdm((imagenet_path/'train').glob(pattern))]
train_labels = [get_word2vec_from_fname(str(p)) for p in tqdm((imagenet_path/'train').glob(pattern))]
# generator
train_gen = ImageGenerator(path, train_fnames, train_labels, classes_size, batch_size)
print('Training set has %d batches of size %d' % (len(train_gen), batch_size))
###Output
_____no_output_____
###Markdown
Store data for later use
###Code
import pickle
pickle.dump(train_fnames, open('/content/gdrive/My Drive/data/devise_train_fnames.pkl','wb'))
pickle.dump(train_labels, open('/content/gdrive/My Drive/data/devise_train_labels.pkl','wb'))
train_fnames = pickle.load(open('/content/gdrive/My Drive/data/devise_train_fnames.pkl','rb'))
train_labels = pickle.load(open('/content/gdrive/My Drive/data/devise_train_labels.pkl','rb'))
###Output
_____no_output_____
###Markdown
validation dataset
###Code
# validation set
valid_fnames = []
valid_labels = []
for line in open('tiny-imagenet-200/val/val_annotations.txt'):
fname, synset = line.split('\t')[:2]
fname = (imagenet_path/'val/images')/fname
valid_fnames.append(fname.as_posix())
word = syn2word[synset]
vec = get_vec_by_word(word)
valid_labels.append(vec)
# generator
valid_gen = ImageGenerator(path, valid_fnames, valid_labels, classes_size, batch_size)
print('Validation set has %d batches of size %d' % (len(valid_gen), batch_size))
###Output
Validation set has 156 batches of size 64
###Markdown
testing dataset
###Code
test_fnames = [p for p in (imagenet_path/'test').glob('images/*.JPEG')]
test_gen = ImageGenerator(path, test_fnames, [], classes_size, batch_size)
print('Test set has %d batches of size %d' % (len(test_gen), batch_size))
###Output
Test set has 156 batches of size 64
###Markdown
databunch
###Code
# combine all datasets into a bunch
data = DataBunch(path, 300, train_gen, valid_gen, test_gen)
data.show_bunch(get_label_from_fname)
###Output
_____no_output_____
###Markdown
Learner
A ResNet-50 based architecture
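The `freeze`/`unfreeze` helpers below take a layer index `limit`, with negative indices counting from the end of the model. A minimal sketch of the call used later in `_create_model`:
```
# freeze all layers up to and including the third-to-last layer,
# leaving only the tail of the network trainable
Learner.freeze(model, -3)
```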
###Code
class Learner():
"""Base learner object"""
def __init__(self):
pass
# freeze all layers of the model (from left to right)
@classmethod
def freeze(cls, model, limit=None):
# handle negative indices
if limit != None and limit < -1:
limit = limit + len(model.layers)
# loop for all valid indices and mark the corresponding layer
for index, layer in enumerate(model.layers):
if limit != None and index > limit:
break
layer.trainable = False
# unfreeze all layers of the model up to the given layer index (from right to left)
@classmethod
def unfreeze(cls, model, limit=None):
# handle negative indices
if limit != None and limit < -1:
limit = limit + len(model.layers)
for index, layer in enumerate(model.layers):
if limit != None and index < limit:
continue
layer.trainable = True
class ImageClassificationLearner(Learner):
"""Image classification learner"""
def __init__(self, data, archi, loss='categorical_crossentropy', metrics=['accuracy']):
self.data = data
self.model = self._create_model(archi)
# compile the model to before training
adam = Adam(lr=0.001, epsilon=0.01, decay=0.0001)
self.model.compile(adam, loss, metrics)
def _create_model(self, archi):
model1 = ResNet50(weights='imagenet')
# 1. freeze the original model up to the last layer we will keep
Learner.freeze(model1, -3)
# 2. create a new model that will be chained to the output of our base model
x = model1.layers[-3].output # shape should be (bs=None, 7, 7, 2048)
x = Dropout(rate=0.3)(x) # shape should be (bs=None, 7, 7, 2048)
x = GlobalAveragePooling2D()(x) # shape should be (bs=None, 2048)
x = Dense(1024, activation='relu')(x) # shape should be (bs=None, 1024)
x = BatchNormalization()(x)
        y = Dense(self.data.classes_size, activation='linear')(x) # shape should be (bs=None, classes_size)
model2 = Model(inputs=model1.input, outputs=y)
return model2
def fit(self, epochs=5):
# fit the model using the previous generators
train_gen = self.data.train_gen
valid_gen = self.data.valid_gen
history = self.model.fit_generator(generator=train_gen, validation_data=valid_gen, epochs=epochs, use_multiprocessing=True)
return history
###Output
_____no_output_____
###Markdown
Loss function
Cosine Distance
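For the unit-normalized target word vector $y$ and prediction $\hat{y}$, the cosine distance computed below is $$ L(y, \hat{y}) = 1 - \frac{y \cdot \hat{y}}{\lVert y \rVert \, \lVert \hat{y} \rVert} $$ which is 0 when the two vectors point in the same direction and grows as they diverge.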
###Code
# the loss function as an inverse cosine distance
def cosine_loss(y, y_hat):
# unit-normalize y and y_hat ()
y = tf.math.l2_normalize(y, axis=1)
y_hat = tf.math.l2_normalize(y_hat, axis=1)
# cosine distance for normalized tensors
loss = tf.losses.cosine_distance(y, y_hat, axis=1)
return loss
###Output
_____no_output_____
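###Markdown
As a quick sanity check of the same formula in plain numpy (a sketch, independent of TensorFlow): for unit vectors the loss is $1 - \cos\theta$, so identical vectors give 0 and orthogonal vectors give 1.
```
import numpy as np
a = np.array([1.0, 0.0]); b = np.array([0.0, 1.0])
print(1 - np.dot(a, a))  # 0.0 for identical unit vectors
print(1 - np.dot(a, b))  # 1.0 for orthogonal unit vectors
```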
###Markdown
Training
###Code
learner = ImageClassificationLearner(data, loss=cosine_loss, archi='resnet50')
history = learner.fit()
plt.plot(history.history['loss'], label="train")
plt.plot(history.history['val_loss'], label="valid")
# Add legend
plt.legend(loc='upper left')
# Add title and x, y labels
plt.title("Losses over epoch", fontsize=16, fontweight='bold')
#plt.suptitle("Random Walk Suptitle", fontsize=10)
plt.xlabel("epoch")
plt.ylabel("Loss")
plt.show()
plt.plot(history.history['acc'], label="train")
plt.plot(history.history['val_acc'], label="valid")
# Add legend
plt.legend(loc='upper left')
# Add title and x, y labels
plt.title("Accuracy over epoch", fontsize=16, fontweight='bold')
#plt.suptitle("Random Walk Suptitle", fontsize=10)
plt.xlabel("epoch")
plt.ylabel("Accuracy")
plt.show()
# Save the trained model in an HDF5 file
learner.model.save('devise.h5')
#!cp devise.h5 /content/gdrive/My\ Drive/data/models
!cp /content/gdrive/My\ Drive/data/models/devise.h5 .
# Load the trained model
model = load_model('devise.h5', custom_objects={'cosine_loss': cosine_loss})
model.summary()
###Output
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D) (None, 230, 230, 3) 0 input_2[0][0]
__________________________________________________________________________________________________
conv1 (Conv2D) (None, 112, 112, 64) 9472 conv1_pad[0][0]
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization) (None, 112, 112, 64) 256 conv1[0][0]
__________________________________________________________________________________________________
activation_49 (Activation) (None, 112, 112, 64) 0 bn_conv1[0][0]
__________________________________________________________________________________________________
pool1_pad (ZeroPadding2D) (None, 114, 114, 64) 0 activation_49[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 56, 56, 64) 0 pool1_pad[0][0]
__________________________________________________________________________________________________
res2a_branch2a (Conv2D) (None, 56, 56, 64) 4160 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2a_branch2a[0][0]
__________________________________________________________________________________________________
activation_50 (Activation) (None, 56, 56, 64) 0 bn2a_branch2a[0][0]
__________________________________________________________________________________________________
res2a_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_50[0][0]
__________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2a_branch2b[0][0]
__________________________________________________________________________________________________
activation_51 (Activation) (None, 56, 56, 64) 0 bn2a_branch2b[0][0]
__________________________________________________________________________________________________
res2a_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_51[0][0]
__________________________________________________________________________________________________
res2a_branch1 (Conv2D) (None, 56, 56, 256) 16640 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
bn2a_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2a_branch2c[0][0]
__________________________________________________________________________________________________
bn2a_branch1 (BatchNormalizatio (None, 56, 56, 256) 1024 res2a_branch1[0][0]
__________________________________________________________________________________________________
add_16 (Add) (None, 56, 56, 256) 0 bn2a_branch2c[0][0]
bn2a_branch1[0][0]
__________________________________________________________________________________________________
activation_52 (Activation) (None, 56, 56, 256) 0 add_16[0][0]
__________________________________________________________________________________________________
res2b_branch2a (Conv2D) (None, 56, 56, 64) 16448 activation_52[0][0]
__________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2b_branch2a[0][0]
__________________________________________________________________________________________________
activation_53 (Activation) (None, 56, 56, 64) 0 bn2b_branch2a[0][0]
__________________________________________________________________________________________________
res2b_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_53[0][0]
__________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2b_branch2b[0][0]
__________________________________________________________________________________________________
activation_54 (Activation) (None, 56, 56, 64) 0 bn2b_branch2b[0][0]
__________________________________________________________________________________________________
res2b_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_54[0][0]
__________________________________________________________________________________________________
bn2b_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2b_branch2c[0][0]
__________________________________________________________________________________________________
add_17 (Add) (None, 56, 56, 256) 0 bn2b_branch2c[0][0]
activation_52[0][0]
__________________________________________________________________________________________________
activation_55 (Activation) (None, 56, 56, 256) 0 add_17[0][0]
__________________________________________________________________________________________________
res2c_branch2a (Conv2D) (None, 56, 56, 64) 16448 activation_55[0][0]
__________________________________________________________________________________________________
bn2c_branch2a (BatchNormalizati (None, 56, 56, 64) 256 res2c_branch2a[0][0]
__________________________________________________________________________________________________
activation_56 (Activation) (None, 56, 56, 64) 0 bn2c_branch2a[0][0]
__________________________________________________________________________________________________
res2c_branch2b (Conv2D) (None, 56, 56, 64) 36928 activation_56[0][0]
__________________________________________________________________________________________________
bn2c_branch2b (BatchNormalizati (None, 56, 56, 64) 256 res2c_branch2b[0][0]
__________________________________________________________________________________________________
activation_57 (Activation) (None, 56, 56, 64) 0 bn2c_branch2b[0][0]
__________________________________________________________________________________________________
res2c_branch2c (Conv2D) (None, 56, 56, 256) 16640 activation_57[0][0]
__________________________________________________________________________________________________
bn2c_branch2c (BatchNormalizati (None, 56, 56, 256) 1024 res2c_branch2c[0][0]
__________________________________________________________________________________________________
add_18 (Add) (None, 56, 56, 256) 0 bn2c_branch2c[0][0]
activation_55[0][0]
__________________________________________________________________________________________________
activation_58 (Activation) (None, 56, 56, 256) 0 add_18[0][0]
__________________________________________________________________________________________________
res3a_branch2a (Conv2D) (None, 28, 28, 128) 32896 activation_58[0][0]
__________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2a[0][0]
__________________________________________________________________________________________________
activation_59 (Activation) (None, 28, 28, 128) 0 bn3a_branch2a[0][0]
__________________________________________________________________________________________________
res3a_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_59[0][0]
__________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2b[0][0]
__________________________________________________________________________________________________
activation_60 (Activation) (None, 28, 28, 128) 0 bn3a_branch2b[0][0]
__________________________________________________________________________________________________
res3a_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_60[0][0]
__________________________________________________________________________________________________
res3a_branch1 (Conv2D) (None, 28, 28, 512) 131584 activation_58[0][0]
__________________________________________________________________________________________________
bn3a_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3a_branch2c[0][0]
__________________________________________________________________________________________________
bn3a_branch1 (BatchNormalizatio (None, 28, 28, 512) 2048 res3a_branch1[0][0]
__________________________________________________________________________________________________
add_19 (Add) (None, 28, 28, 512) 0 bn3a_branch2c[0][0]
bn3a_branch1[0][0]
__________________________________________________________________________________________________
activation_61 (Activation) (None, 28, 28, 512) 0 add_19[0][0]
__________________________________________________________________________________________________
res3b_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_61[0][0]
__________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2a[0][0]
__________________________________________________________________________________________________
activation_62 (Activation) (None, 28, 28, 128) 0 bn3b_branch2a[0][0]
__________________________________________________________________________________________________
res3b_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_62[0][0]
__________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2b[0][0]
__________________________________________________________________________________________________
activation_63 (Activation) (None, 28, 28, 128) 0 bn3b_branch2b[0][0]
__________________________________________________________________________________________________
res3b_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_63[0][0]
__________________________________________________________________________________________________
bn3b_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3b_branch2c[0][0]
__________________________________________________________________________________________________
add_20 (Add) (None, 28, 28, 512) 0 bn3b_branch2c[0][0]
activation_61[0][0]
__________________________________________________________________________________________________
activation_64 (Activation) (None, 28, 28, 512) 0 add_20[0][0]
__________________________________________________________________________________________________
res3c_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_64[0][0]
__________________________________________________________________________________________________
bn3c_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2a[0][0]
__________________________________________________________________________________________________
activation_65 (Activation) (None, 28, 28, 128) 0 bn3c_branch2a[0][0]
__________________________________________________________________________________________________
res3c_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_65[0][0]
__________________________________________________________________________________________________
bn3c_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2b[0][0]
__________________________________________________________________________________________________
activation_66 (Activation) (None, 28, 28, 128) 0 bn3c_branch2b[0][0]
__________________________________________________________________________________________________
res3c_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_66[0][0]
__________________________________________________________________________________________________
bn3c_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3c_branch2c[0][0]
__________________________________________________________________________________________________
add_21 (Add) (None, 28, 28, 512) 0 bn3c_branch2c[0][0]
activation_64[0][0]
__________________________________________________________________________________________________
activation_67 (Activation) (None, 28, 28, 512) 0 add_21[0][0]
__________________________________________________________________________________________________
res3d_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_67[0][0]
__________________________________________________________________________________________________
bn3d_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2a[0][0]
__________________________________________________________________________________________________
activation_68 (Activation) (None, 28, 28, 128) 0 bn3d_branch2a[0][0]
__________________________________________________________________________________________________
res3d_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_68[0][0]
__________________________________________________________________________________________________
bn3d_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2b[0][0]
__________________________________________________________________________________________________
activation_69 (Activation) (None, 28, 28, 128) 0 bn3d_branch2b[0][0]
__________________________________________________________________________________________________
res3d_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_69[0][0]
__________________________________________________________________________________________________
bn3d_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3d_branch2c[0][0]
__________________________________________________________________________________________________
add_22 (Add) (None, 28, 28, 512) 0 bn3d_branch2c[0][0]
activation_67[0][0]
__________________________________________________________________________________________________
activation_70 (Activation) (None, 28, 28, 512) 0 add_22[0][0]
__________________________________________________________________________________________________
res4a_branch2a (Conv2D) (None, 14, 14, 256) 131328 activation_70[0][0]
__________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2a[0][0]
__________________________________________________________________________________________________
activation_71 (Activation) (None, 14, 14, 256) 0 bn4a_branch2a[0][0]
__________________________________________________________________________________________________
res4a_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_71[0][0]
__________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2b[0][0]
__________________________________________________________________________________________________
activation_72 (Activation) (None, 14, 14, 256) 0 bn4a_branch2b[0][0]
__________________________________________________________________________________________________
res4a_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_72[0][0]
__________________________________________________________________________________________________
res4a_branch1 (Conv2D) (None, 14, 14, 1024) 525312 activation_70[0][0]
__________________________________________________________________________________________________
bn4a_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4a_branch2c[0][0]
__________________________________________________________________________________________________
bn4a_branch1 (BatchNormalizatio (None, 14, 14, 1024) 4096 res4a_branch1[0][0]
__________________________________________________________________________________________________
add_23 (Add) (None, 14, 14, 1024) 0 bn4a_branch2c[0][0]
bn4a_branch1[0][0]
__________________________________________________________________________________________________
activation_73 (Activation) (None, 14, 14, 1024) 0 add_23[0][0]
__________________________________________________________________________________________________
res4b_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_73[0][0]
__________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2a[0][0]
__________________________________________________________________________________________________
activation_74 (Activation) (None, 14, 14, 256) 0 bn4b_branch2a[0][0]
__________________________________________________________________________________________________
res4b_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_74[0][0]
__________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2b[0][0]
__________________________________________________________________________________________________
activation_75 (Activation) (None, 14, 14, 256) 0 bn4b_branch2b[0][0]
__________________________________________________________________________________________________
res4b_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_75[0][0]
__________________________________________________________________________________________________
bn4b_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4b_branch2c[0][0]
__________________________________________________________________________________________________
add_24 (Add) (None, 14, 14, 1024) 0 bn4b_branch2c[0][0]
activation_73[0][0]
__________________________________________________________________________________________________
activation_76 (Activation) (None, 14, 14, 1024) 0 add_24[0][0]
__________________________________________________________________________________________________
res4c_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_76[0][0]
__________________________________________________________________________________________________
bn4c_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2a[0][0]
__________________________________________________________________________________________________
activation_77 (Activation) (None, 14, 14, 256) 0 bn4c_branch2a[0][0]
__________________________________________________________________________________________________
res4c_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_77[0][0]
__________________________________________________________________________________________________
bn4c_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2b[0][0]
__________________________________________________________________________________________________
activation_78 (Activation) (None, 14, 14, 256) 0 bn4c_branch2b[0][0]
__________________________________________________________________________________________________
res4c_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_78[0][0]
__________________________________________________________________________________________________
bn4c_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4c_branch2c[0][0]
__________________________________________________________________________________________________
add_25 (Add) (None, 14, 14, 1024) 0 bn4c_branch2c[0][0]
activation_76[0][0]
__________________________________________________________________________________________________
activation_79 (Activation) (None, 14, 14, 1024) 0 add_25[0][0]
__________________________________________________________________________________________________
res4d_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_79[0][0]
__________________________________________________________________________________________________
bn4d_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2a[0][0]
__________________________________________________________________________________________________
activation_80 (Activation) (None, 14, 14, 256) 0 bn4d_branch2a[0][0]
__________________________________________________________________________________________________
res4d_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_80[0][0]
__________________________________________________________________________________________________
bn4d_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2b[0][0]
__________________________________________________________________________________________________
activation_81 (Activation) (None, 14, 14, 256) 0 bn4d_branch2b[0][0]
__________________________________________________________________________________________________
res4d_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_81[0][0]
__________________________________________________________________________________________________
bn4d_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4d_branch2c[0][0]
__________________________________________________________________________________________________
add_26 (Add) (None, 14, 14, 1024) 0 bn4d_branch2c[0][0]
activation_79[0][0]
__________________________________________________________________________________________________
activation_82 (Activation) (None, 14, 14, 1024) 0 add_26[0][0]
__________________________________________________________________________________________________
res4e_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_82[0][0]
__________________________________________________________________________________________________
bn4e_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2a[0][0]
__________________________________________________________________________________________________
activation_83 (Activation) (None, 14, 14, 256) 0 bn4e_branch2a[0][0]
__________________________________________________________________________________________________
res4e_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_83[0][0]
__________________________________________________________________________________________________
bn4e_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2b[0][0]
__________________________________________________________________________________________________
activation_84 (Activation) (None, 14, 14, 256) 0 bn4e_branch2b[0][0]
__________________________________________________________________________________________________
res4e_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_84[0][0]
__________________________________________________________________________________________________
bn4e_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4e_branch2c[0][0]
__________________________________________________________________________________________________
add_27 (Add) (None, 14, 14, 1024) 0 bn4e_branch2c[0][0]
activation_82[0][0]
__________________________________________________________________________________________________
activation_85 (Activation) (None, 14, 14, 1024) 0 add_27[0][0]
__________________________________________________________________________________________________
res4f_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_85[0][0]
__________________________________________________________________________________________________
bn4f_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2a[0][0]
__________________________________________________________________________________________________
activation_86 (Activation) (None, 14, 14, 256) 0 bn4f_branch2a[0][0]
__________________________________________________________________________________________________
res4f_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_86[0][0]
__________________________________________________________________________________________________
bn4f_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2b[0][0]
__________________________________________________________________________________________________
activation_87 (Activation) (None, 14, 14, 256) 0 bn4f_branch2b[0][0]
__________________________________________________________________________________________________
res4f_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_87[0][0]
__________________________________________________________________________________________________
bn4f_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4f_branch2c[0][0]
__________________________________________________________________________________________________
add_28 (Add) (None, 14, 14, 1024) 0 bn4f_branch2c[0][0]
activation_85[0][0]
__________________________________________________________________________________________________
activation_88 (Activation) (None, 14, 14, 1024) 0 add_28[0][0]
__________________________________________________________________________________________________
res5a_branch2a (Conv2D) (None, 7, 7, 512) 524800 activation_88[0][0]
__________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2a[0][0]
__________________________________________________________________________________________________
activation_89 (Activation) (None, 7, 7, 512) 0 bn5a_branch2a[0][0]
__________________________________________________________________________________________________
res5a_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_89[0][0]
__________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2b[0][0]
__________________________________________________________________________________________________
activation_90 (Activation) (None, 7, 7, 512) 0 bn5a_branch2b[0][0]
__________________________________________________________________________________________________
res5a_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_90[0][0]
__________________________________________________________________________________________________
res5a_branch1 (Conv2D) (None, 7, 7, 2048) 2099200 activation_88[0][0]
__________________________________________________________________________________________________
bn5a_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5a_branch2c[0][0]
__________________________________________________________________________________________________
bn5a_branch1 (BatchNormalizatio (None, 7, 7, 2048) 8192 res5a_branch1[0][0]
__________________________________________________________________________________________________
add_29 (Add) (None, 7, 7, 2048) 0 bn5a_branch2c[0][0]
bn5a_branch1[0][0]
__________________________________________________________________________________________________
activation_91 (Activation) (None, 7, 7, 2048) 0 add_29[0][0]
__________________________________________________________________________________________________
res5b_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_91[0][0]
__________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2a[0][0]
__________________________________________________________________________________________________
activation_92 (Activation) (None, 7, 7, 512) 0 bn5b_branch2a[0][0]
__________________________________________________________________________________________________
res5b_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_92[0][0]
__________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2b[0][0]
__________________________________________________________________________________________________
activation_93 (Activation) (None, 7, 7, 512) 0 bn5b_branch2b[0][0]
__________________________________________________________________________________________________
res5b_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_93[0][0]
__________________________________________________________________________________________________
bn5b_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5b_branch2c[0][0]
__________________________________________________________________________________________________
add_30 (Add) (None, 7, 7, 2048) 0 bn5b_branch2c[0][0]
activation_91[0][0]
__________________________________________________________________________________________________
activation_94 (Activation) (None, 7, 7, 2048) 0 add_30[0][0]
__________________________________________________________________________________________________
res5c_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_94[0][0]
__________________________________________________________________________________________________
bn5c_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2a[0][0]
__________________________________________________________________________________________________
activation_95 (Activation) (None, 7, 7, 512) 0 bn5c_branch2a[0][0]
__________________________________________________________________________________________________
res5c_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_95[0][0]
__________________________________________________________________________________________________
bn5c_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2b[0][0]
__________________________________________________________________________________________________
activation_96 (Activation) (None, 7, 7, 512) 0 bn5c_branch2b[0][0]
__________________________________________________________________________________________________
res5c_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_96[0][0]
__________________________________________________________________________________________________
bn5c_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5c_branch2c[0][0]
__________________________________________________________________________________________________
add_31 (Add) (None, 7, 7, 2048) 0 bn5c_branch2c[0][0]
activation_94[0][0]
__________________________________________________________________________________________________
activation_97 (Activation) (None, 7, 7, 2048) 0 add_31[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 7, 7, 2048) 0 activation_97[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 2048) 0 dropout_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 1024) 2098176 global_average_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 1024) 4096 dense_2[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 300) 307500 batch_normalization_1[0][0]
==================================================================================================
Total params: 25,997,484
Trainable params: 2,407,724
Non-trainable params: 23,589,760
__________________________________________________________________________________________________
###Markdown
Search
###Code
# release the training generator and run garbage collection to free
# memory before building the search indexes below
train_gen = None
import gc
gc.collect()
###Output
_____no_output_____
###Markdown
Search: ImageNet classes. Run predictions on the training dataset to get word vector representations for the classes in the dataset.
###Code
pattern = '*/images/*.JPEG'
fnames = [p.as_posix() for p in (imagenet_path/'train').glob(pattern)]
# parameters
image_size = (224, 224)
data_size = len(fnames)
dimensions = 300
# place holders for X and y
y = np.zeros((data_size, dimensions), dtype=np.float32)
y_hat = np.zeros((data_size, dimensions), dtype=np.float32)
# read images and labels
for i, p in tqdm(enumerate(fnames)):
# original label
y[i, :] = get_word2vec_from_fname(p)
# predicted label
img = preprocessing.image.load_img(p, target_size=image_size)
img_data = preprocessing.image.img_to_array(img)
y_hat[i, :] = model.predict(img_data[None])
y.shape, y_hat.shape
!curl -O http://files.fast.ai/data/classids.txt
!cat classids.txt | wc -l
###Output
82115
###Markdown
Split the syn2word dictionary into a list of synsets and a list of words, then look up the word vector for each word.
###Code
syns_1k, words_1k = zip(*syn2word.items())
wvs_1k = [get_vec_by_word(word) for word in words_1k]
###Output
_____no_output_____
###Markdown
Split the English words in the dictionary (this cannot really be done on Colaboratory; it will crash).
###Code
words_all, wvs_all = zip(*word2vec.items())
len(syns_1k), len(words_1k), len(wvs_1k), len(words_all), len(wvs_all)
import nmslib
def create_index(a):
# Initializes a new index
index = nmslib.init(space='angulardist')
# Add the datapoints to the index
index.addDataPointBatch(a)
# Create the index for querying
index.createIndex()
return index
def get_knns(index, vecs):
"""Get approximate K nearest neighbours for each vector in the list"""
return zip(*index.knnQueryBatch(vecs, k=10, num_threads=4))
def get_knn(index, vec):
"""Get approximate K nearest neighbours of a vector."""
return index.knnQuery(vec, k=10)
# sklearn's exact nearest-neighbour search is too slow at this scale, avoiding it
#from sklearn.neighbors import NearestNeighbors
#wvs_index_sklearn = NearestNeighbors(n_neighbors=10, algorithm='ball_tree').fit(wvs_all)
#distances, indices = wvs_index_sklearn.kneighbors(wvs_all)
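# Quick sanity check of the helpers on toy vectors (hypothetical data,
# just to show the call pattern; nmslib returns at most k neighbours,
# so only 3 here):
_toy = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]], dtype=np.float32)
get_knn(create_index(_toy), np.array([1.0, 0.1], dtype=np.float32))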
###Output
_____no_output_____
###Markdown
KNN on ImageNet labels. Create an index against the word vector representations of the class labels, then find the nearest words for what was predicted.
###Code
wvs_index = create_index(wvs_1k)  # nearest-neighbour index on the ImageNet label word vectors
###Output
_____no_output_____
###Markdown
Get the top 10 nearest neighbours for each true-label vector in `y` (the predicted vectors `y_hat` are queried the same way below):
###Code
get_knn(wvs_index, y[0])
idxs, dists = get_knns(wvs_index, y)
idxs[100:102], idxs[10:12]
###Output
_____no_output_____
###Markdown
Have a look at the nearest neighbours for some vectors:
###Code
offset = 300
idxs[offset:offset+10]
###Output
_____no_output_____
###Markdown
Get the names of the top 3 neighbours for 10 randomly chosen vectors:
###Code
random_ids = [idxs[randint(0, len(idxs) - 1)] for _ in range(10)]  # stdlib randint is inclusive on both ends
[[words_1k[id] for id in ids[:3]] for ids in random_ids]
###Output
_____no_output_____
###Markdown
on predicted labels
###Code
idxs, dists = get_knns(wvs_index, y_hat)
random_ids = [idxs[randint(0, len(idxs) - 1)] for _ in range(10)]  # stdlib randint is inclusive on both ends
[[words_1k[id] for id in ids[:3]] for ids in random_ids]
###Output
_____no_output_____
###Markdown
These results are complete garbage. KNN on WordNet labels: create an index against the word vector representations in FastText, then find the nearest words for what was predicted.
###Code
wvs_index = create_index(wvs_all)  # nearest-neighbour index on all FastText word vectors
idxs, dists = get_knns(wvs_index, y)
random_ids = [idxs[randint(0, len(idxs) - 1)] for _ in range(10)]  # stdlib randint is inclusive on both ends
[[words_all[id] for id in ids[:3]] for ids in random_ids]
idxs, dists = get_knns(wvs_index, y_hat)
[[words_all[id] for id in ids[:3]] for ids in random_ids]
###Output
_____no_output_____
###Markdown
Search: Text to Image. Helper functions to display search results.
###Code
def ceildiv(a, b):
    # ceiling division via floor division of the negated operands
    return -(-a // b)
def show_img_old(fname, figsize=None, ax=None):
img = preprocessing.image.load_img(fname, target_size=image_size)
if not ax:
fig,ax = plt.subplots(figsize=figsize)
ax.imshow(img)
ax.axis('off')
return ax
def show_imgs_old(fnames, cols, figsize=None):
fig,axes = plt.subplots(len(fnames)//cols, cols, figsize=figsize)
for i,ax in enumerate(axes.flat):
show_img(fnames[i], ax=ax)
plt.tight_layout()
plt.show()
def show_img(fname, figsize=None, ax=None):
if not ax:
fig,ax = plt.subplots(figsize=figsize)
img = mpimg.imread(fname)
ax.imshow(img)
ax.axis('off')
return ax
def show_imgs(imspaths, figsize=(10,5), rows=1, titles=None, maintitle=None):
"""Plot the images in a grid"""
f = plt.figure(figsize=figsize)
if maintitle is not None: plt.suptitle(maintitle, fontsize=10)
for i in range(len(imspaths)):
sp = f.add_subplot(rows, ceildiv(len(imspaths), rows), i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
img = plt.imread(imspaths[i])
plt.imshow(img)
###Output
_____no_output_____
###Markdown
Create an index on the predicted image vectors, then look up the images whose vector representations are nearest to the representation of an example word.
###Code
wvs_index = create_index(y_hat) # nearest neighborhood on imageword2vec wvs
word = 'boat'
wv = en_vecs.get_word_vector(word) # word2vec[word]
idxs, dists = get_knn(wvs_index, wv)
imgs = [[fnames[id] for id in ids[:3]] for ids in [idxs]]; imgs
show_imgs(imgs[0], rows=1, maintitle='Similar images of '+word)
word = 'car'
wv = en_vecs.get_word_vector(word) # word2vec[word]
idxs, dists = get_knn(wvs_index, wv)
imgs = [[fnames[id] for id in ids[:3]] for ids in [idxs]]
show_imgs(imgs[0], rows=1)
word = 'engine'
wv = en_vecs.get_word_vector(word) # word2vec[word]
idxs, dists = get_knn(wvs_index, wv)
imgs = [[fnames[id] for id in ids[:3]] for ids in [idxs]]
show_imgs(imgs[0], rows=1)
###Output
_____no_output_____
###Markdown
Search for something between a boat and an engine: word vectors behave roughly linearly, so the average of two word vectors is a plausible query for a concept in between.
###Code
#vec = (np.array(word2vec['engine']) + np.array(word2vec['boat'])) / 2
vec = (np.array(en_vecs.get_word_vector('engine')) + np.array(en_vecs.get_word_vector('boat'))) / 2
idxs, dists = get_knn(wvs_index, vec.tolist())
imgs = [[fnames[id] for id in ids[:3]] for ids in [idxs]]
show_imgs(imgs[0], rows=1)
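# The blend need not be an even split; e.g. lean the query 75% towards
# 'boat' (a hedged variant of the averaged query above):
vec = 0.75 * np.array(en_vecs.get_word_vector('boat')) + 0.25 * np.array(en_vecs.get_word_vector('engine'))
idxs, dists = get_knn(wvs_index, vec.tolist())
imgs = [[fnames[id] for id in ids[:3]] for ids in [idxs]]
show_imgs(imgs[0], rows=1)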
###Output
_____no_output_____
###Markdown
Search: Image to Image
###Code
fname = imagenet_path/'test/images/test_104.JPEG'
show_img(fname)
img = preprocessing.image.load_img(fname, target_size=image_size)
img_data = preprocessing.image.img_to_array(img)
vec = model.predict(img_data[None])
idxs, dists = get_knn(wvs_index, vec)
imgs = [[fnames[id] for id in ids[:3]] for ids in [idxs]]
show_imgs(imgs[0], rows=1)
###Output
_____no_output_____
###Markdown
Oh, finally something meaningful: images with cars and beaches!
###Code
###Output
_____no_output_____
docs/tutorials/fermi_lat.ipynb
###Markdown
Fermi-LAT with Gammapy IntroductionThis tutorial will show you how to work with Fermi-LAT data with Gammapy. As an example, we will look at the Galactic center region using the high-energy dataset that was used for the 3FHL catalog, in the energy range 10 GeV to 2 TeV.We note that support for Fermi-LAT data analysis in Gammapy is very limited. For most tasks, we recommend you use [Fermipy](http://fermipy.readthedocs.io/), which is based on the [Fermi Science Tools](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/) (Fermi ST).Using Gammapy with Fermi-LAT data could be an option for you if you want to do an analysis that is not easily possible with Fermipy and the Fermi Science Tools. For example, a joint likelihood fit of Fermi-LAT data with data e.g. from H.E.S.S., MAGIC, VERITAS or some other instrument, or analysis of Fermi-LAT data with a complex spatial or spectral model that is not available in Fermipy or the Fermi ST.Besides Gammapy, you might want to look at [Sherpa](http://cxc.harvard.edu/sherpa/) or [3ML](https://threeml.readthedocs.io/). Or you can just use Python to roll your own analysis using several existing analysis packages. E.g. it is possible to use Fermipy and the Fermi ST to evaluate the likelihood on Fermi-LAT data, and Gammapy to evaluate it e.g. for IACT data, and to do a joint likelihood fit using e.g. [iminuit](http://iminuit.readthedocs.io/) or [emcee](http://dfm.io/emcee).To use Fermi-LAT data with Gammapy, you first have to use the Fermi ST to prepare an event list (using ``gtselect`` and ``gtmktime``), an exposure cube (using ``gtexpcube2``) and a PSF (using ``gtpsf``). You can then use `~gammapy.data.EventList`, `~gammapy.maps` and the `~gammapy.irf.EnergyDependentTablePSF` to read the Fermi-LAT maps and PSF, i.e. support for these high-level analysis products from the Fermi ST is built in. To do a 3D map analysis, you can use Fit for Fermi-LAT data in the same way that it's used for IACT data. This is illustrated in this notebook. A 1D region-based spectral analysis is also possible; this will be illustrated in a future tutorial. Setup**IMPORTANT**: For this notebook you have to get the prepared ``3fhl`` dataset provided in your $GAMMAPY_DATA.Note that the ``3fhl`` dataset is high-energy only, ranging from 10 GeV to 2 TeV.
###Code
# Check that you have the prepared Fermi-LAT dataset
# We will use diffuse models from here
!ls -1 $GAMMAPY_DATA/fermi_3fhl
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord
from gammapy.data import EventList
from gammapy.datasets import MapDataset
from gammapy.irf import EnergyDependentTablePSF, PSFMap, EDispMap
from gammapy.maps import Map, MapAxis, WcsGeom
from gammapy.modeling.models import (
PowerLawSpectralModel,
PointSpatialModel,
SkyModel,
TemplateSpatialModel,
PowerLawNormSpectralModel,
Models,
create_fermi_isotropic_diffuse_model,
)
from gammapy.modeling import Fit
###Output
_____no_output_____
###Markdown
EventsTo load up the Fermi-LAT event list, use the `~gammapy.data.EventList` class:
###Code
events = EventList.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_events_selected.fits.gz"
)
print(events)
###Output
_____no_output_____
###Markdown
The event data is stored in an [astropy.table.Table](http://docs.astropy.org/en/stable/api/astropy.table.Table.html) object. In the case of the Fermi-LAT event list, this contains all the additional information on position, zenith angle, Earth azimuth angle, event class, event type, etc.
###Code
events.table.colnames
events.table[:5][["ENERGY", "RA", "DEC"]]
print(events.time[0].iso)
print(events.time[-1].iso)
energy = events.energy
energy.info("stats")
###Output
_____no_output_____
###Markdown
As a short analysis example we will count the number of events above a certain minimum energy:
###Code
for e_min in [10, 100, 1000] * u.GeV:
n = (events.energy > e_min).sum()
print(f"Events above {e_min:4.0f}: {n:5.0f}")
###Output
_____no_output_____
###Markdown
CountsLet us start to prepare things for a 3D map analysis of the Galactic center region with Gammapy. The first thing we do is define the map geometry. We choose a TAN projection centered on position ``(glon, glat) = (0, 0)`` with pixel size 0.1 deg, and four energy bins.
###Code
gc_pos = SkyCoord(0, 0, unit="deg", frame="galactic")
energy_axis = MapAxis.from_edges(
[1e4, 3e4, 1e5, 3e5, 2e6], name="energy", unit="MeV", interp="log"
)
counts = Map.create(
skydir=gc_pos,
npix=(100, 80),
proj="TAN",
frame="galactic",
binsz=0.1,
axes=[energy_axis],
dtype=float,
)
# We put this call into the same Jupyter cell as the Map.create
# because otherwise we could accidentally fill the counts
# multiple times when executing the ``fill_by_coord`` multiple times.
counts.fill_by_coord({"skycoord": events.radec, "energy": events.energy})
counts.geom.axes[0]
counts.sum_over_axes().smooth(2).plot(stretch="sqrt", vmax=30);
###Output
_____no_output_____
###Markdown
ExposureThe Fermi-LAT dataset contains the energy-dependent exposure for the whole sky as a HEALPix map computed with ``gtexpcube2``. This format is supported by `~gammapy.maps` directly.Interpolating the exposure cube from the Fermi ST to get an exposure cube matching the spatial geometry and energy axis defined above with Gammapy is easy. The only point to watch out for is how exactly you want the energy axis and binning handled.Below we just use the default behaviour, which is linear interpolation in energy on the original exposure cube. Probably log interpolation would be better, but it doesn't matter much here, because the energy binning is fine. Finally, we just copy the counts map geometry, which contains an energy axis with `node_type="edges"`. This is non-ideal for exposure cubes, but again, acceptable because exposure doesn't vary much from bin to bin, so the exact way interpolation occurs in later use of that exposure cube doesn't matter a lot. Of course you could define any energy axis for your exposure cube that you like.
###Code
exposure_hpx = Map.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_exposure_cube_hpx.fits.gz"
)
print(exposure_hpx.geom)
print(exposure_hpx.geom.axes[0])
exposure_hpx.plot();
# For exposure, we choose a geometry with node_type='center',
# whereas for counts it was node_type='edge'
axis = MapAxis.from_nodes(
counts.geom.axes[0].center, name="energy_true", unit="MeV", interp="log"
)
geom = WcsGeom(wcs=counts.geom.wcs, npix=counts.geom.npix, axes=[axis])
exposure = exposure_hpx.interp_to_geom(geom)
print(exposure.geom)
print(exposure.geom.axes[0])
# Exposure is almost constant across the field of view
exposure.slice_by_idx({"energy_true": 0}).plot(add_cbar=True);
# Exposure varies very little with energy at these high energies
energy = [10, 100, 1000] * u.GeV
exposure.get_by_coord({"skycoord": gc_pos, "energy_true": energy})
###Output
_____no_output_____
###Markdown
Galactic diffuse background The Fermi-LAT collaboration provides a Galactic diffuse emission model that can be used as a background model for Fermi-LAT source analysis.Diffuse model maps are very large (100s of MB), so as an example here, we just load one that represents a small cutout for the Galactic center region.
###Code
diffuse_galactic_fermi = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/gll_iem_v06_gc.fits.gz"
)
# Unit is not stored in the file, set it manually
diffuse_galactic_fermi.unit = "cm-2 s-1 MeV-1 sr-1"
print(diffuse_galactic_fermi.geom)
print(diffuse_galactic_fermi.geom.axes[0])
template_diffuse = TemplateSpatialModel(
diffuse_galactic_fermi, normalize=False
)
diffuse_iem = SkyModel(
spectral_model=PowerLawNormSpectralModel(),
spatial_model=template_diffuse,
name="diffuse-iem",
)
###Output
_____no_output_____
###Markdown
Let's look at the map of the first energy band of the cube:
###Code
template_diffuse.map.slice_by_idx({"energy_true": 0}).plot(add_cbar=True);
###Output
_____no_output_____
###Markdown
Here is the spectrum at the Galactic center:
###Code
# Spectrum of the diffuse model at the Galactic center position
energy = np.logspace(1, 3, 10) * u.GeV
dnde = template_diffuse.map.interp_by_coord(
{"skycoord": gc_pos, "energy_true": energy},
method="linear",
fill_value=None,
)
plt.plot(energy.value, dnde, "+")
plt.loglog()
plt.xlabel("Energy (GeV)")
plt.ylabel("Flux (cm-2 s-1 MeV-1 sr-1)")
# TODO: show how one can fix the extrapolation to high energy
# by computing and padding an extra plane, e.g. at 1e3 TeV,
# that corresponds to a linear extrapolation; a rough sketch follows.
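# A minimal sketch of that idea (assumptions: per-pixel log-log linearity;
# zero-flux pixels would need masking; re-packing the padded cube into a
# Map/TemplateSpatialModel is omitted here):
data = diffuse_galactic_fermi.data  # shape (n_energy, n_lat, n_lon)
e_nodes = diffuse_galactic_fermi.geom.axes[0].center.to_value("MeV")
slope = (np.log(data[-1]) - np.log(data[-2])) / np.log(e_nodes[-1] / e_nodes[-2])
e_pad = 1e9  # extra plane at 1e3 TeV, in MeV
plane_pad = data[-1] * (e_pad / e_nodes[-1]) ** slope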
###Output
_____no_output_____
###Markdown
Isotropic diffuse backgroundTo load the isotropic diffuse model with Gammapy, use the `~gammapy.modeling.models.create_fermi_isotropic_diffuse_model` helper, which builds a `~gammapy.modeling.models.TemplateSpectralModel`. We pass `interp_kwargs={"fill_value": None}` so the model can be extrapolated above 500 GeV:
###Code
filename = "$GAMMAPY_DATA/fermi_3fhl/iso_P8R2_SOURCE_V6_v06.txt"
diffuse_iso = create_fermi_isotropic_diffuse_model(
filename=filename, interp_kwargs={"fill_value": None}
)
###Output
_____no_output_____
###Markdown
We can plot the model in the energy range between 50 GeV and 2000 GeV:
###Code
energy_range = [50, 2000] * u.GeV
diffuse_iso.spectral_model.plot(energy_range, flux_unit="1 / (cm2 MeV s)");
###Output
_____no_output_____
###Markdown
PSFNext we will take a look at the PSF. It was computed using ``gtpsf``, in this case for the Galactic center position. Note that generally for Fermi-LAT, the PSF varies only a little within a given region of the sky, especially at high energies like the ones we have here. We use the `~gammapy.irf.EnergyDependentTablePSF` class to load the PSF and use some of its methods to get some information about it.
###Code
psf_table = EnergyDependentTablePSF.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_psf_gc.fits.gz"
)
print(psf_table)
###Output
_____no_output_____
###Markdown
To get an idea of the size of the PSF, we check how the containment radii of the Fermi-LAT PSF vary with energy for different containment fractions:
###Code
plt.figure(figsize=(8, 5))
psf_table.plot_containment_vs_energy(linewidth=2, fractions=[0.68, 0.95])
plt.xlim(50, 2000)
plt.show()
###Output
_____no_output_____
###Markdown
In addition we can check how the actual shape of the PSF varies with energy and compare it against the mean PSF between 50 GeV and 2000 GeV:
###Code
plt.figure(figsize=(8, 5))
for energy in [100, 300, 1000] * u.GeV:
psf_at_energy = psf_table.table_psf_at_energy(energy)
psf_at_energy.plot_psf_vs_rad(label=f"PSF @ {energy:.0f}", lw=2)
energy_range = [50, 2000] * u.GeV
spectrum = PowerLawSpectralModel(index=2.3)
psf_mean = psf_table.table_psf_in_energy_range(
energy_range=energy_range, spectrum=spectrum
)
psf_mean.plot_psf_vs_rad(label="PSF Mean", lw=4, c="k", ls="--")
plt.xlim(1e-3, 0.3)
plt.ylim(1e3, 1e6)
plt.legend();
# Let's compute a PSF kernel matching the pixel size of our map
psf = PSFMap.from_energy_dependent_table_psf(psf_table)
psf_kernel = psf.get_psf_kernel(
position=geom.center_skydir, geom=geom, max_radius="1 deg"
)
psf_kernel.psf_kernel_map.sum_over_axes().plot(stretch="log", add_cbar=True);
###Output
_____no_output_____
###Markdown
Energy DispersionFor simplicity we assume a diagonal energy dispersion:
###Code
e_true = exposure.geom.axes["energy_true"]
edisp = EDispMap.from_diagonal_response(energy_axis_true=e_true)
###Output
_____no_output_____
###Markdown
FitNow, the big finale: let’s do a 3D map fit for the source at the Galactic center, to measure its position and spectrum. We keep the background normalization free.
###Code
spatial_model = PointSpatialModel(
lon_0="0 deg", lat_0="0 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=2.7, amplitude="5.8e-10 cm-2 s-1 TeV-1", reference="100 GeV"
)
source = SkyModel(
spectral_model=spectral_model,
spatial_model=spatial_model,
name="source-gc",
)
models = Models([source, diffuse_iem, diffuse_iso])
dataset = MapDataset(
models=models, counts=counts, exposure=exposure, psf=psf, edisp=edisp
)
%%time
fit = Fit([dataset])
result = fit.run()
print(result)
print(models)
residual = counts - dataset.npred()
residual.sum_over_axes().smooth("0.1 deg").plot(
cmap="coolwarm", vmin=-3, vmax=3, add_cbar=True
);
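# A rough per-pixel check (a hedged sketch, not a proper Li & Ma
# significance): (counts - npred) / sqrt(npred) on the energy-summed maps.
npred_tot = dataset.npred().sum_over_axes()
resid_tot = residual.sum_over_axes()
approx_sig = resid_tot.copy()
approx_sig.data = resid_tot.data / np.sqrt(npred_tot.data)
approx_sig.smooth("0.1 deg").plot(cmap="coolwarm", vmin=-5, vmax=5, add_cbar=True);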
###Output
_____no_output_____
Fermi-LAT data with Gammapy IntroductionThis tutorial will show you how to work with Fermi-LAT data with Gammapy. As an example, we will look at the Galactic center region using the high-energy dataset that was used for the 3FHL catalog, in the energy range 10 GeV to 2 TeV.We note that support for Fermi-LAT data analysis in Gammapy is very limited. For most tasks, we recommend you use [Fermipy](http://fermipy.readthedocs.io/), which is based on the [Fermi Science Tools](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/) (Fermi ST).Using Gammapy with Fermi-LAT data could be an option for you if you want to do an analysis that is not easily possible with Fermipy and the Fermi Science Tools. For example a joint likelihood fit of Fermi-LAT data with data e.g. from H.E.S.S., MAGIC, VERITAS or some other instrument, or analysis of Fermi-LAT data with a complex spatial or spectral model that is not available in Fermipy or the Fermi ST.Besides Gammapy, you might want to look at are [Sherpa](http://cxc.harvard.edu/sherpa/) or [3ML](https://threeml.readthedocs.io/). Or just using Python to roll your own analyis using several existing analysis packages. E.g. it it possible to use Fermipy and the Fermi ST to evaluate the likelihood on Fermi-LAT data, and Gammapy to evaluate it e.g. for IACT data, and to do a joint likelihood fit using e.g. [iminuit](http://iminuit.readthedocs.io/) or [emcee](http://dfm.io/emcee).To use Fermi-LAT data with Gammapy, you first have to use the Fermi ST to prepare an event list (using ``gtselect`` and ``gtmktime``, exposure cube (using ``gtexpcube2`` and PSF (using ``gtpsf``). You can then use `~gammapy.data.EventList`, `~gammapy.maps` and the `~gammapy.irf.EnergyDependentTablePSF` to read the Fermi-LAT maps and PSF, i.e. support for these high-level analysis products from the Fermi ST is built in. To do a 3D map analyis, you can use Fit for Fermi-LAT data in the same way that it's use for IACT data. This is illustrated in this notebook. A 1D region-based spectral analysis is also possible, this will be illustrated in a future tutorial. Setup**IMPORTANT**: For this notebook you have to get the prepared ``3fhl`` dataset provided in your $GAMMAPY_DATA.Note that the ``3fhl`` dataset is high-energy only, ranging from 10 GeV to 2 TeV.
###Code
# Check that you have the prepared Fermi-LAT dataset
# We will use diffuse models from here
!ls -1 $GAMMAPY_DATA/fermi_3fhl
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord
from gammapy.data import EventList
from gammapy.datasets import MapDataset
from gammapy.irf import EnergyDependentTablePSF, PSFMap, EDispMap
from gammapy.maps import Map, MapAxis, WcsGeom
from gammapy.modeling.models import (
PowerLawSpectralModel,
PointSpatialModel,
SkyModel,
TemplateSpatialModel,
PowerLawNormSpectralModel,
Models,
create_fermi_isotropic_diffuse_model,
)
from gammapy.modeling import Fit
###Output
_____no_output_____
###Markdown
EventsTo load up the Fermi-LAT event list, use the `~gammapy.data.EventList` class:
###Code
events = EventList.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_events_selected.fits.gz"
)
print(events)
###Output
_____no_output_____
###Markdown
The event data is stored in an [astropy.table.Table](http://docs.astropy.org/en/stable/api/astropy.table.Table.html) object. In the case of the Fermi-LAT event list, this contains all the additional information on position, zenith angle, Earth azimuth angle, event class, event type, etc.
###Code
events.table.colnames
events.table[:5][["ENERGY", "RA", "DEC"]]
print(events.time[0].iso)
print(events.time[-1].iso)
energy = events.energy
energy.info("stats")
###Output
_____no_output_____
###Markdown
As a short analysis example we will count the number of events above a certain minimum energy:
###Code
for e_min in [10, 100, 1000] * u.GeV:
n = (events.energy > e_min).sum()
print(f"Events above {e_min:4.0f}: {n:5.0f}")
###Output
_____no_output_____
###Markdown
CountsLet us start to prepare things for a 3D map analysis of the Galactic center region with Gammapy. The first thing we do is to define the map geometry. We choose a TAN projection centered on position ``(glon, glat) = (0, 0)`` with pixel size 0.1 deg, and four energy bins.
###Code
gc_pos = SkyCoord(0, 0, unit="deg", frame="galactic")
energy_axis = MapAxis.from_edges(
[1e4, 3e4, 1e5, 3e5, 2e6], name="energy", unit="MeV", interp="log"
)
counts = Map.create(
skydir=gc_pos,
npix=(100, 80),
proj="TAN",
frame="galactic",
binsz=0.1,
axes=[energy_axis],
dtype=float,
)
# We put this call into the same Jupyter cell as the Map.create
# because otherwise we could accidentally fill the counts
# multiple times when executing the ``fill_by_coord`` multiple times.
counts.fill_by_coord({"skycoord": events.radec, "energy": events.energy})
counts.geom.axes[0]
counts.sum_over_axes().smooth(2).plot(stretch="sqrt", vmax=30);
###Output
_____no_output_____
###Markdown
ExposureThe Fermi-LAT dataset contains the energy-dependent exposure for the whole sky as a HEALPix map computed with ``gtexpcube2``. This format is supported by `~gammapy.maps` directly.Interpolating the exposure cube from the Fermi ST to get an exposure cube matching the spatial geometry and energy axis defined above with Gammapy is easy. The only point to watch out for is how exactly you want the energy axis and binning handled.Below we just use the default behaviour, which is linear interpolation in energy on the original exposure cube. Probably log interpolation would be better, but it doesn't matter much here, because the energy binning is fine. Finally, we just copy the counts map geometry, which contains an energy axis with `node_type="edges"`. This is non-ideal for exposure cubes, but again, acceptable because exposure doesn't vary much from bin to bin, so the exact way interpolation occurs in later use of that exposure cube doesn't matter a lot. Of course you could define any energy axis for your exposure cube that you like.
###Code
exposure_hpx = Map.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_exposure_cube_hpx.fits.gz"
)
print(exposure_hpx.geom)
print(exposure_hpx.geom.axes[0])
exposure_hpx.plot();
# For exposure, we choose a geometry with node_type='center',
# whereas for counts it was node_type='edge'
axis = MapAxis.from_nodes(
counts.geom.axes[0].center, name="energy_true", unit="MeV", interp="log"
)
geom = WcsGeom(wcs=counts.geom.wcs, npix=counts.geom.npix, axes=[axis])
exposure = exposure_hpx.interp_to_geom(geom)
print(exposure.geom)
print(exposure.geom.axes[0])
# Exposure is almost constant across the field of view
exposure.slice_by_idx({"energy_true": 0}).plot(add_cbar=True);
# Exposure varies very little with energy at these high energies
energy = [10, 100, 1000] * u.GeV
exposure.get_by_coord({"skycoord": gc_pos, "energy_true": energy})
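# To illustrate the linear-vs-log interpolation point made above, here is a
# plain-numpy sketch (the exposure values below are made up for demonstration):
e_nodes = np.array([1e4, 3e4])            # MeV
expo_nodes = np.array([3.0e10, 3.6e10])   # cm2 s (hypothetical values)
e_query = 2e4
lin = np.interp(e_query, e_nodes, expo_nodes)
log = 10 ** np.interp(np.log10(e_query), np.log10(e_nodes), np.log10(expo_nodes))
print(lin, log)  # the two estimates differ only mildly when the binning is fine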
###Output
_____no_output_____
###Markdown
Galactic diffuse background The Fermi-LAT collaboration provides a Galactic diffuse emission model that can be used as a background model for Fermi-LAT source analysis.Diffuse model maps are very large (hundreds of MB), so as an example here we just load one that represents a small cutout for the Galactic center region.
###Code
diffuse_galactic_fermi = Map.read(
"$GAMMAPY_DATA/fermi-3fhl-gc/gll_iem_v06_gc.fits.gz"
)
# Unit is not stored in the file, set it manually
diffuse_galactic_fermi.unit = "cm-2 s-1 MeV-1 sr-1"
print(diffuse_galactic_fermi.geom)
print(diffuse_galactic_fermi.geom.axes[0])
template_diffuse = TemplateSpatialModel(
diffuse_galactic_fermi, normalize=False
)
diffuse_iem = SkyModel(
spectral_model=PowerLawNormSpectralModel(),
spatial_model=template_diffuse,
name="diffuse-iem",
)
###Output
_____no_output_____
###Markdown
Let's look at the map of the first energy band of the cube:
###Code
template_diffuse.map.slice_by_idx({"energy_true": 0}).plot(add_cbar=True);
###Output
_____no_output_____
###Markdown
Here is the spectrum at the Galactic center:
###Code
# Exposure varies very little with energy at these high energies
energy = np.logspace(1, 3, 10) * u.GeV
dnde = template_diffuse.map.interp_by_coord(
{"skycoord": gc_pos, "energy_true": energy},
interp="linear",
fill_value=None,
)
plt.plot(energy.value, dnde, "+")
plt.loglog()
plt.xlabel("Energy (GeV)")
plt.ylabel("Flux (cm-2 s-1 MeV-1 sr-1)")
# TODO: show how one can fix the extrapolate to high energy
# by computing and padding an extra plane e.g. at 1e3 TeV
# that corresponds to a linear extrapolation
###Output
_____no_output_____
###Markdown
Isotropic diffuse backgroundTo load the isotropic diffuse model with Gammapy, use the `~gammapy.modeling.models.TemplateSpectralModel`. We pass `interp_kwargs={'fill_value': None}`, which lets the model be extrapolated beyond its tabulated energies (above 500 GeV):
###Code
filename = "$GAMMAPY_DATA/fermi_3fhl/iso_P8R2_SOURCE_V6_v06.txt"
diffuse_iso = create_fermi_isotropic_diffuse_model(
filename=filename, interp_kwargs={"fill_value": None}
)
###Output
_____no_output_____
###Markdown
We can plot the model in the energy range between 50 GeV and 2000 GeV:
###Code
energy_range = [50, 2000] * u.GeV
diffuse_iso.spectral_model.plot(energy_range, flux_unit="1 / (cm2 MeV s)");
###Output
_____no_output_____
###Markdown
PSFNext we will take a look at the PSF. It was computed using ``gtpsf``, in this case for the Galactic center position. Note that generally for Fermi-LAT, the PSF varies only a little within a given region of the sky, especially at high energies like the ones we have here. We use the `~gammapy.irf.EnergyDependentTablePSF` class to load the PSF and use some of its methods to get some information about it.
###Code
psf_table = EnergyDependentTablePSF.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_psf_gc.fits.gz"
)
print(psf_table)
###Output
_____no_output_____
###Markdown
To get an idea of the size of the PSF, we check how the containment radii of the Fermi-LAT PSF vary with energy for different containment fractions:
###Code
plt.figure(figsize=(8, 5))
psf_table.plot_containment_vs_energy(linewidth=2, fractions=[0.68, 0.95])
plt.xlim(50, 2000)
plt.show()
###Output
_____no_output_____
###Markdown
In addition we can check how the actual shape of the PSF varies with energy and compare it against the mean PSF between 50 GeV and 2000 GeV:
###Code
plt.figure(figsize=(8, 5))
for energy in [100, 300, 1000] * u.GeV:
psf_at_energy = psf_table.table_psf_at_energy(energy)
psf_at_energy.plot_psf_vs_rad(label=f"PSF @ {energy:.0f}", lw=2)
energy_range = [50, 2000] * u.GeV
spectrum = PowerLawSpectralModel(index=2.3)
psf_mean = psf_table.table_psf_in_energy_range(
energy_range=energy_range, spectrum=spectrum
)
psf_mean.plot_psf_vs_rad(label="PSF Mean", lw=4, c="k", ls="--")
plt.xlim(1e-3, 0.3)
plt.ylim(1e3, 1e6)
plt.legend();
# Let's compute a PSF kernel matching the pixel size of our map
psf = PSFMap.from_energy_dependent_table_psf(psf_table)
psf_kernel = psf.get_psf_kernel(
position=geom.center_skydir, geom=geom, max_radius="1 deg"
)
psf_kernel.psf_kernel_map.sum_over_axes().plot(stretch="log", add_cbar=True);
###Output
_____no_output_____
###Markdown
Energy DispersionFor simplicity we assume a diagonal energy dispersion:
###Code
e_true = exposure.geom.axes["energy_true"]
edisp = EDispMap.from_diagonal_response(energy_axis_true=e_true)
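# "Diagonal" response: reconstructed energy is taken equal to true energy,
# i.e. no event migration between energy bins. This is a reasonable first
# approximation here, since the Fermi-LAT energy resolution at these
# energies is roughly at the 10% level.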
###Output
_____no_output_____
###Markdown
FitNow, the big finale: let’s do a 3D map fit for the source at the Galactic center, to measure its position and spectrum. We keep the background normalization free.
###Code
spatial_model = PointSpatialModel(
lon_0="0 deg", lat_0="0 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=2.7, amplitude="5.8e-10 cm-2 s-1 TeV-1", reference="100 GeV"
)
source = SkyModel(
spectral_model=spectral_model,
spatial_model=spatial_model,
name="source-gc",
)
models = Models([source, diffuse_iem, diffuse_iso])
dataset = MapDataset(
models=models, counts=counts, exposure=exposure, psf=psf, edisp=edisp
)
%%time
fit = Fit([dataset])
result = fit.run()
print(result)
print(models)
residual = counts - dataset.npred()
residual.sum_over_axes().smooth("0.1 deg").plot(
cmap="coolwarm", vmin=-3, vmax=3, add_cbar=True
);
###Output
_____no_output_____
###Markdown
Fermi-LAT data with Gammapy IntroductionThis tutorial will show you how to work with Fermi-LAT data with Gammapy. As an example, we will look at the Galactic center region using the high-energy dataset that was used for the 3FHL catalog, in the energy range 10 GeV to 2 TeV.We note that support for Fermi-LAT data analysis in Gammapy is very limited. For most tasks, we recommend you use [Fermipy](http://fermipy.readthedocs.io/), which is based on the [Fermi Science Tools](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/) (Fermi ST).Using Gammapy with Fermi-LAT data could be an option for you if you want to do an analysis that is not easily possible with Fermipy and the Fermi Science Tools, for example a joint likelihood fit of Fermi-LAT data with data from H.E.S.S., MAGIC, VERITAS or some other instrument, or an analysis of Fermi-LAT data with a complex spatial or spectral model that is not available in Fermipy or the Fermi ST.Besides Gammapy, you might want to look at [Sherpa](http://cxc.harvard.edu/sherpa/) or [3ML](https://threeml.readthedocs.io/), or just use Python to roll your own analysis using several existing analysis packages. E.g., it is possible to use Fermipy and the Fermi ST to evaluate the likelihood on Fermi-LAT data, and Gammapy to evaluate it e.g. for IACT data, and to do a joint likelihood fit using e.g. [iminuit](http://iminuit.readthedocs.io/) or [emcee](http://dfm.io/emcee).To use Fermi-LAT data with Gammapy, you first have to use the Fermi ST to prepare an event list (using ``gtselect`` and ``gtmktime``), an exposure cube (using ``gtexpcube2``) and a PSF (using ``gtpsf``). You can then use `~gammapy.data.EventList`, `~gammapy.maps` and the `~gammapy.irf.EnergyDependentTablePSF` to read the Fermi-LAT maps and PSF, i.e. support for these high-level analysis products from the Fermi ST is built in. To do a 3D map analysis, you can use Fit for Fermi-LAT data in the same way that it's used for IACT data. This is illustrated in this notebook. A 1D region-based spectral analysis is also possible; this will be illustrated in a future tutorial. Setup**IMPORTANT**: For this notebook you have to get the prepared ``3fhl`` dataset provided in your $GAMMAPY_DATA.Note that the ``3fhl`` dataset is high-energy only, ranging from 10 GeV to 2 TeV.
###Code
# Check that you have the prepared Fermi-LAT dataset
# We will use diffuse models from here
!ls -1 $GAMMAPY_DATA/fermi_3fhl
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord
from gammapy.data import EventList
from gammapy.datasets import MapDataset
from gammapy.datasets.map import MapEvaluator
from gammapy.irf import EnergyDependentTablePSF, PSFMap, EDispMap
from gammapy.maps import Map, MapAxis, WcsNDMap, WcsGeom
from gammapy.modeling.models import (
PowerLawSpectralModel,
PointSpatialModel,
SkyModel,
SkyDiffuseCube,
Models,
create_fermi_isotropic_diffuse_model,
BackgroundModel,
)
from gammapy.modeling import Fit
###Output
_____no_output_____
###Markdown
EventsTo load up the Fermi-LAT event list, use the `~gammapy.data.EventList` class:
###Code
events = EventList.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_events_selected.fits.gz"
)
print(events)
###Output
_____no_output_____
###Markdown
The event data is stored in an [astropy.table.Table](http://docs.astropy.org/en/stable/api/astropy.table.Table.html) object. In the case of the Fermi-LAT event list, this contains all the additional information on position, zenith angle, Earth azimuth angle, event class, event type, etc.
###Code
events.table.colnames
events.table[:5][["ENERGY", "RA", "DEC"]]
print(events.time[0].iso)
print(events.time[-1].iso)
energy = events.energy
energy.info("stats")
###Output
_____no_output_____
###Markdown
As a short analysis example we will count the number of events above a certain minimum energy:
###Code
for e_min in [10, 100, 1000] * u.GeV:
n = (events.energy > e_min).sum()
print(f"Events above {e_min:4.0f}: {n:5.0f}")
###Output
_____no_output_____
###Markdown
CountsLet us start to prepare things for a 3D map analysis of the Galactic center region with Gammapy. The first thing we do is to define the map geometry. We choose a TAN projection centered on position ``(glon, glat) = (0, 0)`` with pixel size 0.1 deg, and four energy bins.
###Code
gc_pos = SkyCoord(0, 0, unit="deg", frame="galactic")
energy_axis = MapAxis.from_edges(
[1e4, 3e4, 1e5, 3e5, 2e6], name="energy", unit="MeV", interp="log"
)
counts = Map.create(
skydir=gc_pos,
npix=(100, 80),
proj="TAN",
frame="galactic",
binsz=0.1,
axes=[energy_axis],
dtype=float,
)
# We put this call into the same Jupyter cell as the Map.create
# because otherwise we could accidentally fill the counts
# multiple times when executing the ``fill_by_coord`` multiple times.
counts.fill_by_coord({"skycoord": events.radec, "energy": events.energy})
counts.geom.axes[0]
counts.sum_over_axes().smooth(2).plot(stretch="sqrt", vmax=30);
###Output
_____no_output_____
###Markdown
ExposureThe Fermi-LAT dataset contains the energy-dependent exposure for the whole sky as a HEALPix map computed with ``gtexpcube2``. This format is supported by `~gammapy.maps` directly.Interpolating the exposure cube from the Fermi ST to get an exposure cube matching the spatial geometry and energy axis defined above with Gammapy is easy. The only point to watch out for is how exactly you want the energy axis and binning handled.Below we just use the default behaviour, which is linear interpolation in energy on the original exposure cube. Probably log interpolation would be better, but it doesn't matter much here, because the energy binning is fine. Finally, we just copy the counts map geometry, which contains an energy axis with `node_type="edges"`. This is non-ideal for exposure cubes, but again, acceptable because exposure doesn't vary much from bin to bin, so the exact way interpolation occurs in later use of that exposure cube doesn't matter a lot. Of course you could define any energy axis for your exposure cube that you like.
###Code
exposure_hpx = Map.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_exposure_cube_hpx.fits.gz"
)
# Unit is not stored in the file, set it manually
exposure_hpx.unit = "cm2 s"
print(exposure_hpx.geom)
print(exposure_hpx.geom.axes[0])
exposure_hpx.plot();
# For exposure, we choose a geometry with node_type='center',
# whereas for counts it was node_type='edge'
axis = MapAxis.from_nodes(
counts.geom.axes[0].center, name="energy_true", unit="MeV", interp="log"
)
geom = WcsGeom(wcs=counts.geom.wcs, npix=counts.geom.npix, axes=[axis])
coord = geom.get_coord()
data = exposure_hpx.interp_by_coord(coord)
exposure = WcsNDMap(geom, data, unit=exposure_hpx.unit, dtype=float)
print(exposure.geom)
print(exposure.geom.axes[0])
# Exposure is almost constant across the field of view
exposure.slice_by_idx({"energy_true": 0}).plot(add_cbar=True);
# Exposure varies very little with energy at these high energies
energy = [10, 100, 1000] * u.GeV
exposure.get_by_coord({"skycoord": gc_pos, "energy_true": energy})
###Output
_____no_output_____
###Markdown
Galactic diffuse background The Fermi-LAT collaboration provides a Galactic diffuse emission model that can be used as a background model for Fermi-LAT source analysis.Diffuse model maps are very large (hundreds of MB), so as an example here we just load one that represents a small cutout for the Galactic center region.
###Code
diffuse_galactic_fermi = Map.read(
"$GAMMAPY_DATA/fermi_3fhl/gll_iem_v06_cutout.fits"
)
# Unit is not stored in the file, set it manually
diffuse_galactic_fermi.unit = "cm-2 s-1 MeV-1 sr-1"
print(diffuse_galactic_fermi.geom)
print(diffuse_galactic_fermi.geom.axes[0])
# Interpolate the diffuse emission model onto the counts geometry
# The resolution of `diffuse_galactic_fermi` is low: bin size = 0.5 deg
# We use ``interp=3`` which means cubic spline interpolation
coord = counts.geom.get_coord()
data = diffuse_galactic_fermi.interp_by_coord(
{"skycoord": coord.skycoord, "energy": coord["energy"]}, interp=3
)
diffuse_galactic = WcsNDMap(
exposure.geom, data, unit=diffuse_galactic_fermi.unit
)
print(diffuse_galactic.geom)
print(diffuse_galactic.geom.axes[0])
diffuse_gal = SkyDiffuseCube(diffuse_galactic, name="diffuse-gal")
###Output
_____no_output_____
###Markdown
Let's look at the map of the first energy band of the cube:
###Code
diffuse_gal.map.slice_by_idx({"energy_true": 0}).plot(add_cbar=True);
###Output
_____no_output_____
###Markdown
Here is the spectrum at the Galactic center:
###Code
# Exposure varies very little with energy at these high energies
energy = np.logspace(1, 3, 10) * u.GeV
dnde = diffuse_gal.map.interp_by_coord(
{"skycoord": gc_pos, "energy_true": energy},
interp="linear",
fill_value=None,
)
plt.plot(energy.value, dnde, "+")
plt.loglog()
plt.xlabel("Energy (GeV)")
plt.ylabel("Flux (cm-2 s-1 MeV-1 sr-1)")
# TODO: show how one can fix the extrapolate to high energy
# by computing and padding an extra plane e.g. at 1e3 TeV
# that corresponds to a linear extrapolation
###Output
_____no_output_____
###Markdown
Isotropic diffuse backgroundTo load the isotropic diffuse model with Gammapy, use the `~gammapy.modeling.models.TemplateSpectralModel`. We pass `interp_kwargs={'fill_value': None}`, which lets the model be extrapolated beyond its tabulated energies (above 500 GeV):
###Code
filename = "$GAMMAPY_DATA/fermi_3fhl/iso_P8R2_SOURCE_V6_v06.txt"
diffuse_iso = create_fermi_isotropic_diffuse_model(
filename=filename, interp_kwargs={"fill_value": None}
)
###Output
_____no_output_____
###Markdown
We can plot the model in the energy range between 50 GeV and 2000 GeV:
###Code
erange = [50, 2000] * u.GeV
diffuse_iso.spectral_model.plot(erange, flux_unit="1 / (cm2 MeV s)");
###Output
_____no_output_____
###Markdown
PSFNext we will take a look at the PSF. It was computed using ``gtpsf``, in this case for the Galactic center position. Note that generally for Fermi-LAT, the PSF varies only a little within a given region of the sky, especially at high energies like the ones we have here. We use the `~gammapy.irf.EnergyDependentTablePSF` class to load the PSF and use some of its methods to get some information about it.
###Code
psf_table = EnergyDependentTablePSF.read(
"$GAMMAPY_DATA/fermi_3fhl/fermi_3fhl_psf_gc.fits.gz"
)
print(psf_table)
###Output
_____no_output_____
###Markdown
To get an idea of the size of the PSF, we check how the containment radii of the Fermi-LAT PSF vary with energy for different containment fractions:
###Code
plt.figure(figsize=(8, 5))
psf_table.plot_containment_vs_energy(linewidth=2, fractions=[0.68, 0.95])
plt.xlim(50, 2000)
plt.show()
###Output
_____no_output_____
###Markdown
In addition we can check how the actual shape of the PSF varies with energy and compare it against the mean PSF between 50 GeV and 2000 GeV:
###Code
plt.figure(figsize=(8, 5))
for energy in [100, 300, 1000] * u.GeV:
psf_at_energy = psf_table.table_psf_at_energy(energy)
psf_at_energy.plot_psf_vs_rad(label=f"PSF @ {energy:.0f}", lw=2)
erange = [50, 2000] * u.GeV
spectrum = PowerLawSpectralModel(index=2.3)
psf_mean = psf_table.table_psf_in_energy_band(
energy_band=erange, spectrum=spectrum
)
psf_mean.plot_psf_vs_rad(label="PSF Mean", lw=4, c="k", ls="--")
plt.xlim(1e-3, 0.3)
plt.ylim(1e3, 1e6)
plt.legend();
# Let's compute a PSF kernel matching the pixel size of our map
psf = PSFMap.from_energy_dependent_table_psf(psf_table)
psf_kernel = psf.get_psf_kernel(
position=geom.center_skydir, geom=geom, max_radius="1 deg"
)
psf_kernel.psf_kernel_map.sum_over_axes().plot(stretch="log", add_cbar=True);
###Output
_____no_output_____
###Markdown
Energy DispersionFor simplicity we assume a diagonal energy dispersion:
###Code
e_true = exposure.geom.get_axis_by_name("energy_true")
edisp = EDispMap.from_diagonal_response(energy_axis_true=e_true)
###Output
_____no_output_____
###Markdown
Pre-processingThe model components for which only a norm is fitted can be pre-processed, so that we don't have to apply the IRFs at each iteration of the fit, which saves computation time. This can be done using the `MapEvaluator`.
###Code
# pre-compute iso model
evaluator = MapEvaluator(diffuse_iso)
evaluator.update(exposure=exposure, psf=psf, edisp=edisp, geom=counts.geom)
diffuse_iso = BackgroundModel(
evaluator.compute_npred(), name="bkg-iso", norm=3.3
)
# pre-compute diffuse model
evaluator = MapEvaluator(diffuse_gal)
evaluator.update(exposure=exposure, psf=psf, edisp=edisp, geom=counts.geom)
diffuse_gal = BackgroundModel(evaluator.compute_npred(), name="bkg-iem")
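# Note: after this pre-computation the two diffuse components are plain
# npred cubes with a single free norm each, so the IRFs above are not
# re-applied to them at every iteration of the fit below.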
###Output
_____no_output_____
###Markdown
FitFinally, the big finale: let’s do a 3D map fit for the source at the Galactic center, to measure its position and spectrum. We keep the background normalization free.
###Code
spatial_model = PointSpatialModel(
lon_0="0 deg", lat_0="0 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=2.7, amplitude="5.8e-10 cm-2 s-1 TeV-1", reference="100 GeV"
)
source = SkyModel(
spectral_model=spectral_model,
spatial_model=spatial_model,
name="source-gc",
)
models = Models([source, diffuse_gal, diffuse_iso])
dataset = MapDataset(
models=models, counts=counts, exposure=exposure, psf=psf, edisp=edisp,
)
%%time
fit = Fit([dataset])
result = fit.run()
print(result)
print(models)
residual = counts - dataset.npred()
residual.sum_over_axes().smooth("0.1 deg").plot(
cmap="coolwarm", vmin=-3, vmax=3, add_cbar=True
);
###Output
_____no_output_____ |
02 Pandas/04 Grouping & Sorting.ipynb | ###Markdown
Grouping & Sorting------ Tutorial---
###Code
from google.colab import drive
drive.mount('/content/gdrive')
###Output
Mounted at /content/gdrive
###Markdown
Groupwise analysisOne function we've been using heavily thus far is the `value_counts()` function. We can replicate what `value_counts()` does using `groupby()`, as shown below:
###Code
import pandas as pd
pd.set_option('max_rows', 5)
import numpy as np
reviews = pd.read_csv("/content/gdrive/MyDrive/Colab Notebooks/Kaggle_Courses/02 Pandas/winemag-data-130k-v2.csv", index_col=0)
reviews.head()
reviews.groupby('points').points.count()
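# For comparison, the built-in shortcut (sorted by index to match the above):
reviews.points.value_counts().sort_index()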
###Output
_____no_output_____
###Markdown
`groupby()` created a group of reviews which allotted the same point values to the given wines. Then, for each of these groups, we grabbed the `points` column and counted how many times it appeared. `value_counts()` is just a shortcut to this `groupby()` operation.
###Code
reviews.groupby('points').price.min()
###Output
_____no_output_____
###Markdown
You can think of each group we generate as being a slice of our DataFrame containing only data with values that match. This DataFrame is accessible to us directly using the `apply()` method, and we can then manipulate the data in any way we see fit.
###Code
reviews.groupby('winery').apply(lambda df: df.title.iloc[0])
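# An equivalent built-in for this particular case is GroupBy.first():
reviews.groupby('winery').title.first()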
###Output
_____no_output_____
###Markdown
For even more fine-grained control, you can also group by more than one column. For an example, here's how we would pick out the best wine by country _and_ province. Another `groupby()` method worth mentioning is `agg()`, which lets you run a bunch of different functions on your DataFrame simultaneously. For example, we can generate a simple statistical summary of the dataset.
###Code
reviews.groupby(['country', 'province']).apply(lambda df: df.loc[df.points.idxmax()])
reviews.groupby(['country']).price.agg([len, min, max])
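# The same summary with named aggregation (pandas >= 0.25), which labels the output columns:
reviews.groupby('country').price.agg(n='count', cheapest='min', priciest='max')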
###Output
_____no_output_____
###Markdown
**Multi-indexes**In all of the examples we've seen thus far we've been working with DataFrame or Series objects with a single-label index. `groupby()` is slightly different in the fact that, depending on the operation we run, it will sometimes result in what is called a multi-index.
###Code
countries_reviewed = reviews.groupby(['country', 'province']).description.agg([len])
countries_reviewed
mi = countries_reviewed.index
type(mi)
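# Retrieving a value now takes both index levels, e.g. (assuming these labels exist):
# countries_reviewed.loc[('US', 'California')]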
###Output
_____no_output_____
###Markdown
Multi-indices have several methods for dealing with their tiered structure which are absent for single-level indices. They also require two levels of labels to retrieve a value. Dealing with multi-index output is a common "gotcha" for users new to pandas.However, in general the multi-index method you will use most often is the one for converting back to a regular index, the `reset_index()` method:
###Code
countries_reviewed.reset_index()
###Output
_____no_output_____
###Markdown
SortingLooking again at `countries_reviewed` we can see that grouping returns data in index order, not in value order. That is to say, when outputting the result of a `groupby`, the order of the rows is dependent on the values in the index, not in the data. To get the data in the order we want it in, we can sort it ourselves. The `sort_values()` method is handy for this.
###Code
countries_reviewed = countries_reviewed.reset_index()
countries_reviewed.sort_values(by='len')
countries_reviewed.sort_values(by='len', ascending=False)
countries_reviewed.sort_index()
countries_reviewed.sort_values(by=['country', 'len'])
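# `ascending` also accepts one flag per column, e.g. country A-Z but len high-to-low:
countries_reviewed.sort_values(by=['country', 'len'], ascending=[True, False])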
###Output
_____no_output_____
###Markdown
Exercise---
###Code
# Who are the most common wine reviewers in the dataset? Create a Series whose index is
# the taster_twitter_handle category from the dataset, and whose values count how many reviews each person wrote.
reviews_written = reviews.groupby('taster_twitter_handle').taster_twitter_handle.count()
reviews_written
###Output
_____no_output_____
###Markdown
What is the best wine I can buy for a given amount of money? Create a Series whose index is wine prices and whose values are the maximum number of points a wine costing that much was given in a review. Sort the values by price, ascending (so that 4.0 dollars is at the top and 3300.0 dollars is at the bottom)
###Code
# Group wines of the same price, take the max points per price, and sort by price (price is the index after grouping, hence sort_index)
best_rating_per_price = reviews.groupby('price')['points'].max().sort_index()
best_rating_per_price
###Output
_____no_output_____
###Markdown
What are the minimum and maximum prices for each variety of wine? Create a DataFrame whose index is the variety category from the dataset and whose values are the min and max values thereof.
###Code
# Grouping Variety and getting its min and max val
price_extremes = reviews.groupby('variety').price.agg([min, max])
price_extremes
###Output
_____no_output_____
###Markdown
What are the most expensive wine varieties? Create a variable sorted_varieties containing a copy of the dataframe from the previous question where varieties are sorted in descending order based on minimum price, then on maximum price (to break ties).
###Code
sorted_varieties = price_extremes.sort_values(by = ['min', 'max'], ascending=False )
sorted_varieties
# Create a Series whose index is reviewers and whose values is the average review score
# given out by that reviewer. Hint: you will need the taster_name and points columns.
reviewer_mean_ratings = reviews.groupby('taster_name').points.mean()
reviewer_mean_ratings
# Are there significant differences in the average scores assigned by the various reviewers?
# Run the cell below to use the describe() method to see a summary of the range of values.
reviewer_mean_ratings.describe()
###Output
_____no_output_____
###Markdown
What combination of countries and varieties are most common? Create a Series whose index is a MultiIndex of {country, variety} pairs. Sort the values in the Series in descending order based on wine count.
###Code
country_variety_counts = reviews.groupby(['country', 'variety']).size().sort_values(ascending=False)
country_variety_counts
###Output
_____no_output_____ |
chapter_10/chap10-object-detection-yolov5.ipynb | ###Markdown
Data
###Code
# training data - bboxes
df = pd.read_csv('../input/global-wheat-detection/train.csv')
df.head(3)
bboxs = np.stack(df['bbox'].apply(lambda x: np.fromstring(x[1:-1], sep=',')))
bboxs
# reformat to yolo
for i, column in enumerate(['x', 'y', 'w', 'h']):
df[column] = bboxs[:,i]
df.drop(columns=['bbox'], inplace=True)
df['x_center'] = df['x'] + df['w']/2
df['y_center'] = df['y'] + df['h']/2
df['classes'] = 0
df.head(3)
# stratify on source
fold_id = np.zeros((df.shape[0],1))
skf = StratifiedKFold(n_splits = 5, random_state = 42, shuffle = True)
for (ff, (train_index, test_index)) in enumerate(skf.split(df, df['source'])):
fold_id[test_index]= int(ff)
df['fold'] = fold_id.copy()
df.head(3)
df = df[['image_id','x', 'y', 'w', 'h','x_center','y_center','classes', 'fold']]
###Output
_____no_output_____
###Markdown
Yolo preparationThe implementation from Ultralytics has some requirements on the structure of the dataset: where the annotations are stored and how the training/validation folders are organized (a sketch of the layout follows below). The creation of the folders in the code below is fairly straightforward, but a more inquisitive reader is encouraged to consult the official documentation: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data
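For orientation, the layout we build below (following the `convertor/fold{fold}` pattern used in the code) is `convertor/fold0/images/train/` and `convertor/fold0/images/valid/` for the `.jpg` files, plus `convertor/fold0/labels/train/` and `convertor/fold0/labels/valid/` for the annotations: one `.txt` file per image, each row holding `class x_center y_center width height` with all box coordinates normalized to [0, 1].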
###Code
source = 'train'
# pick a single fold for demonstration sake
fold = 0
val_index = set(df[df['fold'] == fold]['image_id'])
# loop through the bounding boxes per image
for name,mini in tqdm(df.groupby('image_id')):
# where to save the files
if name in val_index:
path2save = 'valid/'
else:
path2save = 'train/'
# storage path for labels
if not os.path.exists('convertor/fold{}/labels/'.format(fold)+path2save):
os.makedirs('convertor/fold{}/labels/'.format(fold)+path2save)
with open('convertor/fold{}/labels/'.format(fold)+path2save+name+".txt", 'w+') as f:
# normalize the coordinates in accordance with the Yolo format requirements
row = mini[['classes','x_center','y_center','w','h']].astype(float).values
row = row/1024
row = row.astype(str)
for j in range(len(row)):
text = ' '.join(row[j])
f.write(text)
f.write("\n")
if not os.path.exists('convertor/fold{}/images/{}'.format(fold,path2save)):
os.makedirs('convertor/fold{}/images/{}'.format(fold,path2save))
# no preprocessing needed for images => copy them as a batch
sh.copy("../input/global-wheat-detection/{}/{}.jpg".format(source,name),'convertor/fold{}/images/{}/{}.jpg'.format(fold,path2save,name))
###Output
_____no_output_____
###Markdown
Model Actual Yolo
###Code
!git clone https://github.com/ultralytics/yolov5 && cd yolov5 && pip install -r requirements.txt
# check the assigned GPU type
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
yaml_text = """train: /kaggle/working/convertor/fold0/images/train/
val: /kaggle/working/convertor/fold0/images/valid/
nc: 1
names: ['wheat']"""
with open("wheat.yaml", 'w') as f:
f.write(yaml_text)
%cat wheat.yaml
!python ./yolov5/train.py --img 512 --batch 2 --epochs 3 --workers 2 --data wheat.yaml --cfg "./yolov5/models/yolov5s.yaml" --name yolov5x_fold0 --cache
!ls ./yolov5/runs/train/yolov5x_fold0/weights/ -lh
###Output
total 28M
-rw-r--r-- 1 root root 14M Apr 5 23:16 best.pt
-rw-r--r-- 1 root root 14M Apr 5 23:16 last.pt
###Markdown
Prediction
###Code
!ls /kaggle/input/global-wheat-detection/test
!python ./yolov5/detect.py --weights ./yolov5/runs/train/yolov5x_fold0/weights/best.pt --img 512 --conf 0.1 --source /kaggle/input/global-wheat-detection/test --save-txt --save-conf --exist-ok
!ls ./yolov5/runs/detect/exp/labels/
def convert(s):
    # s is one row of a YOLO prediction file: class, x_center, y_center, w, h, conf
    # (box coordinates normalized to [0, 1]). Convert the center-based, normalized
    # box to top-left pixel coordinates for the 1024 x 1024 competition images.
    x = int(1024 * (s[1] - s[3] / 2))
    y = int(1024 * (s[2] - s[4] / 2))
    w = int(1024 * s[3])
    h = int(1024 * s[4])
    # submission box format: "confidence x y w h"
    return str(s[5]) + ' ' + str(x) + ' ' + str(y) + ' ' + str(w) + ' ' + str(h)
with open('submission.csv', 'w') as myfile:
    # prepare the submission: one line per test image,
    # "image_id conf x y w h [conf x y w h ...]"
    wfolder = './yolov5/runs/detect/exp/labels/'
    for f in os.listdir(wfolder):
        fname = wfolder + f
        xdat = pd.read_csv(fname, sep=' ', header=None)
        outline = f[:-4] + ' ' + ' '.join(list(xdat.apply(lambda s: convert(s), axis=1)))
        myfile.write(outline + '\n')
# the with-block closes the file automatically
!cat submission.csv
###Output
53f253011 0.100472 61 669 961 57 0.106223 0 125 234 183 0.1082 96 696 928 126 0.108863 515 393 86 161 0.11459 31 0 167 209 0.120246 517 466 89 147
aac893a91 0.108037 376 435 325 188
796707dd7 0.235373 684 128 234 113
cc3532ff6 0.100443 406 752 144 108 0.102479 405 87 4 89 0.107173 576 537 138 94 0.113459 256 498 179 211 0.114847 836 618 186 65 0.121121 154 544 248 115 0.125105 40 567 483 199
2fd875eaa 0.101398 439 163 204 860 0.112546 807 440 216 323
348a992bb 0.100572 0 10 440 298 0.101236 344 445 401 211
f5a1f0358 0.102549 398 424 295 96
|
User Independence Analysis/ipynb/.ipynb_checkpoints/Stacked Result (Documentation)-checkpoint.ipynb | ###Markdown
RFECV
###Code
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV
rfecv = RFECV(estimator=et, step=1, cv=StratifiedKFold(10), scoring='accuracy')
rfecv.fit(scaled_data_train,train['label'])
print('Optimal number of features: {}'.format(rfecv.n_features_))
plt.figure(figsize=(16, 9))
plt.title('Recursive Feature Elimination with Cross-Validation', fontsize=18, fontweight='bold', pad=20)
plt.xlabel('Number of features selected', fontsize=14, labelpad=20)
plt.ylabel('% Correct Classification', fontsize=14, labelpad=20)
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_, color='#303F9F', linewidth=3)
plt.show()
print(np.where(rfecv.support_ == False)[0])
X = pd.DataFrame(data=scaled_data_train)
X
X_test = pd.DataFrame(data=scaled_data_test)
X.drop(X.columns[np.where(rfecv.support_ == False)[0]], axis=1, inplace=True)
X_test.drop(X_test.columns[np.where(rfecv.support_ == False)[0]], axis=1, inplace=True)
et = ExtraTreesClassifier(n_estimators=100,n_jobs=10,random_state=56)
et.fit(X,train['label'])
y_pred=et.predict(X_test)
print(classification_report(test['label'],y_pred))
# Seed search: keep only seeds whose per-class recall for classes 0, 2 and 3
# all exceed 0.60 on the test set.
for i in range(201):
    et = ExtraTreesClassifier(n_estimators=100, n_jobs=10, random_state=i)
    et.fit(X, train['label'])
    y_pred = et.predict(X_test)
    rep = classification_report(test['label'], y_pred, output_dict=True)
    if rep['0']['recall'] > .60 and rep['2']['recall'] > .60 and rep['3']['recall'] > .60:
        print(i)
        print(classification_report(test['label'], y_pred))
###Output
2
precision recall f1-score support
0 0.80 0.64 0.71 147
1 0.74 0.91 0.82 161
2 0.57 0.62 0.59 147
3 0.70 0.61 0.65 150
accuracy 0.70 605
macro avg 0.70 0.69 0.69 605
weighted avg 0.70 0.70 0.70 605
13
precision recall f1-score support
0 0.86 0.62 0.72 147
1 0.78 0.93 0.84 161
2 0.53 0.63 0.58 147
3 0.69 0.61 0.65 150
accuracy 0.70 605
macro avg 0.72 0.70 0.70 605
weighted avg 0.72 0.70 0.70 605
14
precision recall f1-score support
0 0.81 0.61 0.69 147
1 0.78 0.91 0.84 161
2 0.55 0.65 0.59 147
3 0.75 0.67 0.70 150
accuracy 0.71 605
macro avg 0.72 0.71 0.71 605
weighted avg 0.72 0.71 0.71 605
27
precision recall f1-score support
0 0.85 0.63 0.72 147
1 0.80 0.90 0.85 161
2 0.52 0.63 0.57 147
3 0.69 0.61 0.65 150
accuracy 0.70 605
macro avg 0.71 0.69 0.70 605
weighted avg 0.71 0.70 0.70 605
29
precision recall f1-score support
0 0.80 0.61 0.69 147
1 0.84 0.91 0.87 161
2 0.54 0.70 0.61 147
3 0.76 0.65 0.70 150
accuracy 0.72 605
macro avg 0.74 0.72 0.72 605
weighted avg 0.74 0.72 0.72 605
33
precision recall f1-score support
0 0.83 0.61 0.71 147
1 0.76 0.93 0.84 161
2 0.54 0.62 0.58 147
3 0.72 0.63 0.67 150
accuracy 0.70 605
macro avg 0.71 0.70 0.70 605
weighted avg 0.71 0.70 0.70 605
45
precision recall f1-score support
0 0.85 0.65 0.73 147
1 0.81 0.90 0.85 161
2 0.53 0.67 0.59 147
3 0.75 0.64 0.69 150
accuracy 0.72 605
macro avg 0.74 0.72 0.72 605
weighted avg 0.74 0.72 0.72 605
54
precision recall f1-score support
0 0.84 0.66 0.74 147
1 0.78 0.90 0.84 161
2 0.52 0.61 0.56 147
3 0.68 0.61 0.64 150
accuracy 0.70 605
macro avg 0.71 0.69 0.69 605
weighted avg 0.71 0.70 0.70 605
60
precision recall f1-score support
0 0.83 0.63 0.71 147
1 0.79 0.91 0.85 161
2 0.53 0.64 0.58 147
3 0.73 0.65 0.69 150
accuracy 0.71 605
macro avg 0.72 0.71 0.71 605
weighted avg 0.72 0.71 0.71 605
64
precision recall f1-score support
0 0.80 0.62 0.70 147
1 0.82 0.91 0.86 161
2 0.58 0.69 0.63 147
3 0.75 0.69 0.72 150
accuracy 0.73 605
macro avg 0.74 0.73 0.73 605
weighted avg 0.74 0.73 0.73 605
73
precision recall f1-score support
0 0.81 0.63 0.71 147
1 0.78 0.91 0.84 161
2 0.57 0.65 0.61 147
3 0.73 0.66 0.69 150
accuracy 0.72 605
macro avg 0.72 0.71 0.71 605
weighted avg 0.72 0.72 0.72 605
85
precision recall f1-score support
0 0.86 0.67 0.76 147
1 0.80 0.92 0.85 161
2 0.52 0.63 0.57 147
3 0.73 0.61 0.66 150
accuracy 0.71 605
macro avg 0.73 0.71 0.71 605
weighted avg 0.73 0.71 0.71 605
86
precision recall f1-score support
0 0.89 0.64 0.74 147
1 0.79 0.91 0.84 161
2 0.56 0.71 0.62 147
3 0.72 0.61 0.66 150
accuracy 0.72 605
macro avg 0.74 0.72 0.72 605
weighted avg 0.74 0.72 0.72 605
88
precision recall f1-score support
0 0.82 0.62 0.71 147
1 0.77 0.91 0.84 161
2 0.54 0.66 0.60 147
3 0.73 0.61 0.66 150
accuracy 0.70 605
macro avg 0.72 0.70 0.70 605
weighted avg 0.72 0.70 0.70 605
89
precision recall f1-score support
0 0.82 0.63 0.72 147
1 0.79 0.94 0.86 161
2 0.56 0.70 0.62 147
3 0.76 0.61 0.68 150
accuracy 0.72 605
macro avg 0.74 0.72 0.72 605
weighted avg 0.74 0.72 0.72 605
90
precision recall f1-score support
0 0.80 0.64 0.71 147
1 0.72 0.92 0.81 161
2 0.58 0.61 0.59 147
3 0.71 0.61 0.65 150
accuracy 0.70 605
macro avg 0.70 0.69 0.69 605
weighted avg 0.70 0.70 0.69 605
92
precision recall f1-score support
0 0.84 0.66 0.74 147
1 0.78 0.89 0.83 161
2 0.54 0.67 0.60 147
3 0.76 0.63 0.69 150
accuracy 0.72 605
macro avg 0.73 0.71 0.71 605
weighted avg 0.73 0.72 0.72 605
108
precision recall f1-score support
0 0.87 0.66 0.75 147
1 0.78 0.94 0.85 161
2 0.57 0.65 0.61 147
3 0.73 0.65 0.69 150
accuracy 0.73 605
macro avg 0.74 0.72 0.72 605
weighted avg 0.74 0.73 0.73 605
110
precision recall f1-score support
0 0.83 0.62 0.71 147
1 0.72 0.90 0.80 161
2 0.56 0.63 0.59 147
3 0.72 0.62 0.66 150
accuracy 0.70 605
macro avg 0.71 0.69 0.69 605
weighted avg 0.71 0.70 0.69 605
###Markdown
STACKED
###Code
classification_report(test['label'],y_pred,output_dict = True)['accuracy']
rnd_st = []
# Collect seeds whose models clear the recall thresholds AND reach > 0.72 accuracy;
# these become the base learners of the stacked ensemble below.
for i in range(501):
    et = ExtraTreesClassifier(n_estimators=100, n_jobs=10, random_state=i)
    et.fit(scaled_data_train, train['label'])
    y_pred = et.predict(scaled_data_test)
    rep = classification_report(test['label'], y_pred, output_dict=True)
    if (rep['0']['recall'] > .60 and rep['2']['recall'] > .60
            and rep['3']['recall'] > .60 and rep['accuracy'] > .72):
        print(i)
        rnd_st.append(i)
clf = []
for st in rnd_st:
clf.append(ExtraTreesClassifier(n_estimators=100,n_jobs=10,random_state=st))
# Stacked random state:
# 72
# 86
# 235
# 388
# 396
# Second-level search: try meta-classifier seeds and report stacks beating 0.72 accuracy.
for i in range(500):
    meta = ExtraTreesClassifier(n_estimators=100, n_jobs=10, random_state=i)
    sclf = StackingClassifier(classifiers=clf, meta_classifier=meta)
    sclf.fit(scaled_data_train, train['label'])
    y_pred_sta = sclf.predict(scaled_data_test)
    if classification_report(test['label'], y_pred_sta, output_dict=True)['accuracy'] > .72:
        print(i)
        print(classification_report(test['label'], y_pred_sta))
###Output
0
precision recall f1-score support
0 0.78 0.67 0.72 147
1 0.82 0.93 0.87 161
2 0.58 0.61 0.60 147
3 0.73 0.69 0.71 150
accuracy 0.73 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.73 0.73 605
1
precision recall f1-score support
0 0.78 0.66 0.71 147
1 0.83 0.93 0.88 161
2 0.59 0.63 0.61 147
3 0.74 0.71 0.72 150
accuracy 0.74 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.74 0.74 0.73 605
2
precision recall f1-score support
0 0.77 0.66 0.71 147
1 0.82 0.93 0.87 161
2 0.59 0.61 0.60 147
3 0.73 0.71 0.72 150
accuracy 0.73 605
macro avg 0.73 0.73 0.72 605
weighted avg 0.73 0.73 0.73 605
3
precision recall f1-score support
0 0.78 0.66 0.72 147
1 0.83 0.93 0.88 161
2 0.59 0.64 0.61 147
3 0.73 0.70 0.72 150
accuracy 0.74 605
macro avg 0.74 0.73 0.73 605
weighted avg 0.74 0.74 0.73 605
4
precision recall f1-score support
0 0.77 0.67 0.72 147
1 0.83 0.93 0.88 161
2 0.58 0.62 0.60 147
3 0.74 0.70 0.72 150
accuracy 0.73 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.73 0.73 605
5
precision recall f1-score support
0 0.78 0.66 0.71 147
1 0.83 0.93 0.88 161
2 0.59 0.63 0.61 147
3 0.74 0.70 0.72 150
accuracy 0.73 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.73 0.73 605
6
precision recall f1-score support
0 0.79 0.65 0.71 147
1 0.83 0.94 0.88 161
2 0.58 0.63 0.60 147
3 0.73 0.69 0.71 150
accuracy 0.73 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.73 0.73 605
7
precision recall f1-score support
0 0.77 0.66 0.71 147
1 0.82 0.93 0.87 161
2 0.59 0.61 0.60 147
3 0.73 0.69 0.71 150
accuracy 0.73 605
macro avg 0.73 0.72 0.72 605
weighted avg 0.73 0.73 0.73 605
8
precision recall f1-score support
0 0.77 0.67 0.72 147
1 0.81 0.93 0.87 161
2 0.59 0.61 0.60 147
3 0.73 0.70 0.72 150
accuracy 0.73 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.73 0.73 605
9
precision recall f1-score support
0 0.78 0.66 0.72 147
1 0.83 0.94 0.88 161
2 0.60 0.63 0.61 147
3 0.72 0.70 0.71 150
accuracy 0.74 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.74 0.74 0.73 605
10
precision recall f1-score support
0 0.77 0.66 0.71 147
1 0.82 0.93 0.87 161
2 0.58 0.61 0.60 147
3 0.73 0.70 0.72 150
accuracy 0.73 605
macro avg 0.73 0.73 0.72 605
weighted avg 0.73 0.73 0.73 605
11
precision recall f1-score support
0 0.77 0.65 0.71 147
1 0.82 0.93 0.87 161
2 0.59 0.63 0.61 147
3 0.74 0.71 0.72 150
accuracy 0.73 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.73 0.73 605
12
precision recall f1-score support
0 0.77 0.67 0.71 147
1 0.84 0.93 0.88 161
2 0.59 0.63 0.61 147
3 0.73 0.70 0.72 150
accuracy 0.74 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.74 0.74 0.73 605
13
precision recall f1-score support
0 0.78 0.67 0.72 147
1 0.83 0.94 0.88 161
2 0.59 0.61 0.60 147
3 0.73 0.70 0.71 150
accuracy 0.74 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.74 0.73 605
14
precision recall f1-score support
0 0.77 0.65 0.70 147
1 0.83 0.93 0.88 161
2 0.57 0.61 0.59 147
3 0.73 0.70 0.72 150
accuracy 0.73 605
macro avg 0.72 0.72 0.72 605
weighted avg 0.73 0.73 0.72 605
15
precision recall f1-score support
0 0.77 0.65 0.70 147
1 0.83 0.93 0.88 161
2 0.58 0.63 0.60 147
3 0.72 0.70 0.71 150
accuracy 0.73 605
macro avg 0.73 0.72 0.72 605
weighted avg 0.73 0.73 0.73 605
16
precision recall f1-score support
0 0.77 0.65 0.71 147
1 0.83 0.93 0.88 161
2 0.59 0.63 0.61 147
3 0.73 0.70 0.72 150
accuracy 0.73 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.73 0.73 605
17
precision recall f1-score support
0 0.78 0.66 0.72 147
1 0.84 0.93 0.88 161
2 0.58 0.62 0.60 147
3 0.74 0.71 0.73 150
accuracy 0.74 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.74 0.74 0.73 605
18
precision recall f1-score support
0 0.79 0.65 0.71 147
1 0.82 0.94 0.87 161
2 0.59 0.62 0.60 147
3 0.73 0.70 0.72 150
accuracy 0.73 605
macro avg 0.73 0.73 0.73 605
weighted avg 0.73 0.73 0.73 605
|
Oanda v1 REST-oandapy/10.00 Bonus Materials II.ipynb | ###Markdown
Bonus Materials II Vectorised backtesting
###Code
import numpy as np
import pandas as pd
import oandapy
import configparser
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
config = configparser.ConfigParser()
config.read('../config/config_v1.ini')
account_id = config['oanda']['account_id']
api_key = config['oanda']['api_key']
oanda = oandapy.API(environment="practice",
access_token=api_key)
response = oanda.get_history(instrument="EUR_USD",
granularity="H1",
count = 5000)
res = pd.DataFrame(response['candles'])
res.columns = ['Close_Ask', 'Close_Bid', 'Complete',
'High_Ask', 'High_Bid', 'Low_Ask', 'Low_Bid',
'Open_Ask', 'Open_Bid', 'Time', 'Volume']
# reindex_axis was removed in newer pandas; reorder the columns with reindex instead
res = res.reindex(columns=['Time', 'Open_Bid', 'Open_Ask',
                           'High_Bid', 'High_Ask', 'Low_Bid',
                           'Low_Ask', 'Close_Bid', 'Close_Ask',
                           'Complete', 'Volume'])
df = res[['Time', 'Close_Bid', 'Close_Ask']].copy()
df['rtns'] = res['Close_Bid'].pct_change()
dtrain = df[0:4500].copy()
dtest = df[4500:].copy()
###Output
_____no_output_____
###Markdown
Experiment With the Training Data Set
###Code
k = range(1, 13)   # candidate lookback windows (in hours)
h = 1              # holding period: one bar, applied via shift(1) below
for oo in k:
    # momentum signal: sign of the cumulative return over the last oo bars
    dtrain['signal'] = np.sign(dtrain['rtns'].rolling(oo).sum())
    # trade the next bar on the previous bar's signal (avoids look-ahead bias)
    dtrain['strategy_rtn'] = dtrain['signal'].shift(1) * dtrain['rtns']
    res = dtrain['strategy_rtn'].dropna().sum()
    print('{0:3} {1:>8.4f}'.format(oo, res))
###Output
1 0.0267
2 -0.0196
3 -0.0609
4 0.0201
5 0.0419
6 0.0417
7 0.0146
8 -0.0044
9 -0.0286
10 -0.0169
11 0.0888
12 0.0306
###Markdown
*** Vectorized Backtesting With the Test Set - MomentumNote: we use the k=11,h=1 combination as it has the highest returns in the training set
###Code
dtest['signal'] = np.sign(dtest['rtns'].rolling(11).sum())
dtest['result'] = dtest['signal'].shift(1) * dtest['rtns']
dtest['result'].dropna().cumsum().plot(figsize=(10,6));
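# A quick risk-adjusted view of the same result (a sketch: the annualization
# factor assumes roughly 24 * 252 hourly bars per year, an approximation for FX):
r = dtest['result'].dropna()
print(round(r.mean() / r.std() * np.sqrt(24 * 252), 2))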
###Output
_____no_output_____
###Markdown
*** Vectorized Backtesting With the Test Set - ReversionNote: we use the k=3,h=1 combination as it has the highest returns in the training set for a reversion strategy
###Code
dtest['signal'] = - np.sign(dtest['rtns'].rolling(3).sum())
dtest['result'] = dtest['signal'].shift(1) * dtest['rtns']
dtest['result'].dropna().cumsum().plot(figsize=(10,6));
###Output
_____no_output_____ |
Protein_ligand.ipynb | ###Markdown
**Hello there!**This is a Jupyter notebook for running Molecular Dynamics (MD) simulations using the OpenMM engine and the AMBER force field for **Protein and Ligand** systems. This notebook is a supplementary material of the paper "***Making it rain: Cloud-based molecular simulations for everyone***" ([link here](https://doi.org/10.1021/acs.jcim.1c00998)) and we encourage you to read it before using this pipeline.The main goal of this notebook is to demonstrate how to harness the power of cloud computing to run microsecond-long MD simulations in a cheap and yet feasible fashion.--- **This notebook is NOT a standard protocol for MD simulations!** It is just a simple MD pipeline illustrating each step of a simulation protocol.--- **Bugs**- If you encounter any bugs, please report the issue to https://github.com/pablo-arantes/making-it-rain/issues**Acknowledgments**- We would like to thank the OpenMM team for developing an excellent and open source engine. - We would like to thank the ChemosimLab ([@ChemosimLab](https://twitter.com/ChemosimLab)) team for their incredible [ProLIF](https://prolif.readthedocs.io/en/latest/index.html) (Protein-Ligand Interaction Fingerprints) tool.- A Making-it-rain notebook by **Pablo R. Arantes** ([@pablitoarantes](https://twitter.com/pablitoarantes)), **Marcelo D. Polêto** ([@mdpoleto](https://twitter.com/mdpoleto)), **Conrado Pedebos** ([@ConradoPedebos](https://twitter.com/ConradoPedebos)) and **Rodrigo Ligabue-Braun** ([@ligabue_braun](https://twitter.com/ligabue_braun)).- Also, credit to [David Koes](https://github.com/dkoes) for his awesome [py3Dmol](https://3dmol.csb.pitt.edu/) plugin.- For related notebooks see: [Making-it-rain](https://github.com/pablo-arantes/making-it-rain) **Introduction**In general, MD simulations rely on 1) a set of atomic coordinates of all atoms in a simulation box and 2) a set of force field parameters that describes the interaction energies between atoms.In terms of inputs, we will need:* A .pdb file of the protein and a .pdb file of the ligand containing a set of atomic coordinates.In this notebook, we will simulate PDB 3HTB. To build our simulation box, we will use the LEaP program (https://ambermd.org/tutorials/pengfei/index.php). The LEaP program is a portal between many chemical structure file types (.pdb and .mol2, primarily) and the Amber model parameter file types such as .lib, .prepi, parm.dat, and .frcmod. Each of the parameter files contains pieces of information needed for constructing a simulation, whether for energy minimization or molecular dynamics. LEaP functions within a larger workflow described in Section 1.1 of the [Amber Manual](https://ambermd.org/doc12/Amber20.pdf). To build the ligand topology we will use the general AMBER force field (GAFF - http://ambermd.org/antechamber/gaff.html) and The Open Force Field Toolkit (OpenFF - https://openforcefield.org/). GAFF is compatible with the AMBER force field and it has parameters for almost all the organic molecules made of C, N, O, H, S, P, F, Cl, Br and I. As a complete force field, GAFF is suitable for the study of a great number of molecules in an automatic fashion. The Open Force Field Toolkit, built by the [Open Force Field Initiative](https://openforcefield.org/), is a Python toolkit for the development and application of modern molecular mechanics force fields based on direct chemical perception and rigorous statistical parameterization methods.
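As a rough sketch of what the GAFF route looks like outside a notebook (the programs `antechamber` and `parmchk2` are standard AmberTools tools, but the file names here are placeholders): `antechamber -i ligand.pdb -fi pdb -o ligand.mol2 -fo mol2 -c bcc -nc 0` derives AM1-BCC charges and GAFF atom types (`-nc` is the net charge of your ligand), and `parmchk2 -i ligand.mol2 -f mol2 -o ligand.frcmod` fills in any missing parameters; LEaP then loads the resulting `.mol2`/`.frcmod` pair alongside the protein.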
You can download the input files examples from [here](https://github.com/pablo-arantes/making-it-rain/tree/main/PROTEIN_LIGAND); --- ------ **Setting the environment for MD calculation**Firstly, we need to install all necessary libraries and packages for our simulation. The main packages we will be installing are:1. Anaconda (https://docs.conda.io/en/latest/miniconda.html)2. OpenMM (https://openmm.org/)3. PyTraj (https://amber-md.github.io/pytraj/latest/index.html)4. py3Dmol (https://pypi.org/project/py3Dmol/)5. ProLIF (https://github.com/chemosim-lab/ProLIF)6. Numpy (https://numpy.org/)7. Matplotlib (https://matplotlib.org/)8. AmberTools (https://ambermd.org/AmberTools.php)
###Code
#@title **Install dependencies**
#@markdown It will take a few minutes, please, drink a coffee and wait. ;-)
# install dependencies
%%capture
import sys
!pip -q install py3Dmol 2>&1 1>/dev/null
!pip install --upgrade MDAnalysis 2>&1 1>/dev/null
!pip install biopandas 2>&1 1>/dev/null
!pip install rdkit-pypi
!pip install Cython
!git clone https://github.com/chemosim-lab/ProLIF.git
prolif1 = "cd /content/ProLIF"
prolif2 = "sed -i 's/mdanalysis.*/mdanalysis==2.0.0/' setup.cfg"
prolif3 = "pip install ."
original_stdout = sys.stdout # Save a reference to the original standard output
with open('prolif.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(prolif1)
print(prolif2)
print(prolif3)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 prolif.sh 2>&1 1>/dev/null
!bash prolif.sh >/dev/null 2>&1
# install conda
!wget -qnc https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!bash Miniconda3-latest-Linux-x86_64.sh -bfp /usr/local 2>&1 1>/dev/null
!rm -r Miniconda3-latest-Linux-x86_64.sh /content/ProLIF prolif.sh
!conda install -y -q -c conda-forge openmm=7.6 python=3.7 pdbfixer 2>&1 1>/dev/null
!conda install -c conda-forge ambertools --yes 2>&1 1>/dev/null
!conda install -c ambermd pytraj --yes 2>&1 1>/dev/null
!conda install -c conda-forge parmed --yes 2>&1 1>/dev/null
!conda install -c conda-forge openff-toolkit --yes 2>&1 1>/dev/null
!conda install -c bioconda pybel --yes
!conda install -c openbabel openbabel --yes
#load dependencies
sys.path.append('/usr/local/lib/python3.7/site-packages/')
from openmm import app, unit
from openmm.app import HBonds, NoCutoff, PDBFile
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.typing.engines.smirnoff import ForceField
from openff.toolkit.utils import get_data_file_path
import parmed as pmd
from biopandas.pdb import PandasPdb
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
import os
import urllib.request
import numpy as np
import MDAnalysis as mda
import py3Dmol
from __future__ import print_function
import pytraj as pt
import platform
import scipy.cluster.hierarchy
from scipy.spatial.distance import squareform
import scipy.stats as stats
import matplotlib.pyplot as plt
import pandas as pd
from scipy.interpolate import griddata
import seaborn as sb
from statistics import mean, stdev
from pytraj import matrix
from matplotlib import colors
from IPython.display import set_matplotlib_formats
!wget https://raw.githubusercontent.com/openforcefield/openff-forcefields/master/openforcefields/offxml/openff_unconstrained-2.0.0.offxml 2>&1 1>/dev/null
###Output
_____no_output_____
###Markdown
Using Google Drive to store simulation dataGoogle Colab does not allow users to keep data on their computing nodes. However, we can use Google Drive to read, write, and store our simulations files. Therefore, we suggest to you to:1. Create a folder in your own Google Drive and copy the necessary input files there.2. Copy the path of your created directory. We will use it below.
###Code
#@title ### **Import Google Drive**
#@markdown Click the "Run" button to make your Google Drive accessible.
from google.colab import drive
drive.flush_and_unmount()
drive.mount('/content/drive', force_remount=True)
#@title **Check if you correctly allocated GPU nodes**
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
###Output
_____no_output_____
###Markdown
------ **Loading the necessary input files**At this point, we should have all libraries and dependencies installed and all necessary input files already in your Google Drive folder.**Important**: Make sure the PDB file points to the correct path. If necessary, correct the path and re-upload the files. We will merge the receptor and ligand structure objects to form the complex. Note that the coordinates of protein and ligand are determined by the PDB file, and they should be consistent with the ligand being positioned in the binding pocket.Below, you should provide the names of all input files and the path of your Google Drive folder containing them.**Please don't use spaces in the file and folder names, e.g., protein_input.pdb, MyDrive/protein_ligand and so on.**
###Code
#@title **Please, provide the necessary input files below**:
#@markdown **Important:** The protonation of your ligand is crucial for the correct parameterization of the molecule.
%%capture
import pybel
import rdkit
import mdtraj as md
from rdkit import Chem
from rdkit.Chem import AllChem,Draw
from rdkit.Chem.Draw import IPythonConsole
from pdbfixer import PDBFixer
Protein_PDB_file_name = 'protein.pdb' #@param {type:"string"}
Ligand_PDB_file_name = 'ligand.pdb' #@param {type:"string"}
Add_ligand_hydrogens = "Yes" #@param ["Yes", "No"]
ligand_name = Ligand_PDB_file_name
Google_Drive_Path = '/content/drive/MyDrive/protein_ligand' #@param {type:"string"}
workDir = Google_Drive_Path
file_name = os.path.join(workDir, str(Protein_PDB_file_name))
initial_pdb = os.path.join(workDir, "starting0.pdb")
ligand_pdb = os.path.join(workDir, str(ligand_name))
ligand_pdb2 = os.path.join(workDir, "ligand_H.pdb")
starting = os.path.join(workDir, "starting1.pdb")
starting2 = os.path.join(workDir, "starting2.pdb")
starting_end = os.path.join(workDir, "starting_end.pdb")
#Add hydrogens in the ligand
if Add_ligand_hydrogens == "Yes":
fixer = PDBFixer(filename=ligand_pdb)
PDBFile.writeFile(fixer.topology, fixer.positions, open("temp.pdb", 'w'))
ppdb = PandasPdb().read_pdb("temp.pdb")
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['HETATM']= ppdb.df['HETATM'][ppdb.df['HETATM']['element_symbol'] != 'H']
ppdb.to_pdb(path="temp.pdb", records=['ATOM', 'HETATM'], gz=False, append_newline=True)
mol= [m for m in pybel.readfile(filename="temp.pdb", format='pdb')][0]
mol.calccharges
mol.addh()
out=pybel.Outputfile(filename="temp2.pdb",format='pdb',overwrite=True)
out.write(mol)
out.close()
md.load("temp2.pdb").save("temp2.pdb")
halogens = ['Cl', 'F', 'Br', 'I']
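# Open Babel's addh() can attach spurious hydrogens to halogen atoms. The
# block below records the halogen atom serials and, from the CONECT
# records, the serials of atoms bonded to them, so those hydrogens can be
# filtered out when writing the final ligand PDB.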
atom_id = []
H_id = []
with open("temp2.pdb") as f:
for line in f:
data = line.split()
if data[0] == "ATOM":
if data[2] in halogens:
atom_id.append(data[1])
if data[0] == "CONECT":
if data[1] in atom_id:
if len(data) > 3:
H_id.append(data[3])
H_id.append(data[4])
H_id.append(data[5])
with open(ligand_pdb2, 'w') as h:
with open("temp2.pdb") as f:
for line in f:
data = line.split()
if data[0] == "ATOM":
if data[1] not in H_id:
print(line, file=h)
elif data[0] == "CONECT":
if data[1] not in atom_id:
print(line, file=h)
else:
print(line, file=h)
fixer = PDBFixer(filename=ligand_pdb2)
PDBFile.writeFile(fixer.topology, fixer.positions, open(ligand_pdb2, 'w'))
else:
fixer = PDBFixer(filename=ligand_pdb)
PDBFile.writeFile(fixer.topology, fixer.positions, open(ligand_pdb2, 'w'))
#Fix protein
pdb_parm = pmd.load_file(file_name)
pdb_parm.save(initial_pdb, standard_resnames=True, overwrite=True)
ppdb = PandasPdb().read_pdb(initial_pdb)
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['HETATM'] = ppdb.df['HETATM'][ppdb.df['HETATM']['residue_name'] == 'HOH']
ppdb.df['ATOM'] = ppdb.df['ATOM'][ppdb.df['ATOM']['atom_name'] != 'OXT']
ppdb.df['ATOM']= ppdb.df['ATOM'][ppdb.df['ATOM']['element_symbol'] != 'H']
ppdb.to_pdb(path=starting, records=['ATOM', 'HETATM'], gz=False, append_newline=True)
from Bio.PDB import is_aa
from Bio.PDB import PDBParser, PDBIO, Select
class ProtSelect(Select):
def accept_residue(self, residue):
print(f"{residue} -> {is_aa(residue)}")
return is_aa(residue, standard=True)
from Bio import PDB
pdb_ini = PDBParser().get_structure("pdb", starting)
io = PDBIO()
io.set_structure(pdb_ini)
io.save(starting2, ProtSelect());
pdb4amber_cmd = "pdb4amber -i " + str(starting2) + " -o " + str(starting_end) + " -p"
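# pdb4amber sanitizes the structure for tleap; the -p flag restricts the
# output to protein residues (the ligand is parameterized separately below).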
original_stdout = sys.stdout # Save a reference to the original standard output
with open('pdb4amber.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(pdb4amber_cmd)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 pdb4amber.sh 2>&1 1>/dev/null
!bash pdb4amber.sh 2> /dev/null
!rm pdb4amber.sh temp.pdb temp2.pdb
#@markdown ---
import rdkit
from rdkit import Chem
from rdkit.Chem import AllChem,Draw
from rdkit.Chem.Draw import IPythonConsole
#@title **Enumerate Stereoisomers to generate ligand topology:**
##@markdown **You can find the smiles for your ligand at: https://pubchem.ncbi.nlm.nih.gov/**
mol= [m for m in pybel.readfile(filename=ligand_pdb2, format='pdb')][0]
mol.calccharges()  # actually invoke the charge calculation (bare attribute access was a no-op)
mol.addh()
out=pybel.Outputfile(filename="temp2.smi",format='smiles',overwrite=True)
out.write(mol)
out.close()
fileObj = open("temp2.smi", "r",) #opens the file in read mode
for aRow in fileObj:
smi = aRow.split('\t')
fileObj.close()
Ligand_smiles = smi[0]
!rm temp2.smi >/dev/null 2>&1
mol = Chem.MolFromSmiles(Ligand_smiles)
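# The helper functions below enumerate stereoisomers by generating every
# 0/1 assignment over the detected chiral centres (0 -> CW, 1 -> CCW) and
# writing one isomeric SMILES per combination to smiles.txt.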
def spam(n):
out=[]
for perm in getPerms(n):
elem = [ int(i) for i in list(perm) ]
out.append(elem)
return out
def getPerms(n):
from itertools import permutations
for i in getCandidates(n):
for perm in set(permutations(i)):
yield ''.join(perm)
def getCandidates(n):
for i in range(0, n+1):
res = "1" * i + "0" * (n - i)
yield res
def GetStereoIsomers(mol):
from rdkit import Chem
from copy import copy
out = []
chiralCentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
#return the molecule object when no chiral centres were identified
if chiralCentres == []:
return [mol]
#All bit permutations with number of bits equals number of chiralCentres
elements = spam(len(chiralCentres))
!rm smiles.txt temp2.smi >/dev/null 2>&1
for isoId,element in enumerate(elements):
for centreId,i in enumerate(element):
atomId = chiralCentres[centreId][0]
if i == 0:
mol.GetAtomWithIdx(atomId).SetChiralTag(Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CW)
elif i == 1:
mol.GetAtomWithIdx(atomId).SetChiralTag(Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CCW)
outmol = copy(mol)
out.append(outmol)
print(Chem.MolToSmiles(mol,isomericSmiles=True), file=open("smiles.txt", "a",))
return out
Draw.MolsToGridImage(GetStereoIsomers(mol), subImgSize=(500,200), molsPerRow=1)
chiralCentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
if chiralCentres != []:
print("Follow the stereoisomers for your ligand: \n")
fileObj = open("smiles.txt", "r",) #opens the file in read mode
smiles = fileObj.read().splitlines() #puts the file into an array
fileObj.close()
x = len(smiles[:-1])
for a in range(x+1):
y = smiles[0+a:(a+1)]
globals()[f"isomer{a+1}"] = str(y[0])
print("Isomer " + str(a+1) + " = " + str(y[0]) + "\n")
else:
isomer1 = Ligand_smiles
print("No chiral centres were identified! \nIsomer 1 = " + str(isomer1) )
Draw.MolsToGridImage(GetStereoIsomers(mol), subImgSize=(700,200), molsPerRow=1, returnPNG=True)
from rdkit import Chem
from rdkit.Chem import PandasTools
from openff.toolkit.typing.engines.smirnoff import ForceField
import parmed
#@title **Parameters to generate the topology:**
#@markdown **Parameters to generate the protein topology:**
Force_field = "ff19SB" #@param ["ff19SB", "ff14SB"]
if Force_field == "ff19SB":
ff = "leaprc.protein.ff19SB"
else:
ff = "leaprc.protein.ff14SB"
Water_type = "TIP3P" #@param ["TIP3P", "OPC"]
if Water_type == "TIP3P":
water = "leaprc.water.tip3p"
water_box = "TIP3PBOX"
else:
water = "leaprc.water.opc"
water_box = "OPCBOX"
#@markdown Box size (Angstroms):
Size_box = 12 #@param {type:"slider", min:10, max:20, step:1}
size_box = Size_box
#@markdown **ATTENTION**: Give the concentration in molar units; AMBER tleap will neutralize your system automatically:
Ions = "NaCl" #@param ["NaCl", "KCl" ]
Concentration = "0.15" #@param {type:"string"}
#@markdown **Parameters to generate the ligand topology:**
Ligand_Force_field = "GAFF2" #@param ["GAFF2", "OpenFF 2.0.0 (Sage)"]
Ligand_isomer = "1" #@param {type:"string", min:1, max:10, step:100}
if chiralCentres == []:
isomer_end = isomer1
else:
isomer_end = globals()[f"isomer{Ligand_isomer}"]
Ligand_net_charges = "0" #@param {type:"string", min:-10, max:10, step:1}
#@markdown ---
tleap = os.path.join(workDir, "tleap.in")
top_nw = os.path.join(workDir, "SYS_nw.prmtop")
crd_nw = os.path.join(workDir, "SYS_nw.crd")
pdb_nw = os.path.join(workDir, "SYS_nw.pdb")
top = os.path.join(workDir, "SYS_gaff2.prmtop")
crd = os.path.join(workDir, "SYS_gaff2.crd")
pdb = os.path.join(workDir, "SYS.pdb")
ligand_noh = os.path.join(workDir, "ligand_noh.pdb")
ligand_h = os.path.join(workDir, "ligand_h.pdb")
ligand_mol2 = os.path.join(workDir, "ligand.mol2")
ligand_frcmod = os.path.join(workDir, "ligand.frcmod")
lig_new = os.path.join(workDir, "ligand_gaff.pdb")
protein_ligand = os.path.join(workDir, "protein_ligand.pdb")
lib = os.path.join(workDir, "lig.lib")
gaff_command1 = "pdb4amber -i " + str(ligand_pdb2) + " -o " + str(ligand_h)
gaff_command3 = "antechamber -i " + str(ligand_h) + " -fi pdb -o " + str(ligand_mol2) + " -fo mol2 -c bcc -nc " + str(Ligand_net_charges) + " -rn LIG -at gaff2"
gaff_command4 = "parmchk2 -i " + str(ligand_mol2) + " -f mol2 -o " + str(ligand_frcmod) + " -s gaff2"
original_stdout = sys.stdout # Save a reference to the original standard output
with open('gaff.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(gaff_command1)
print(gaff_command3)
print(gaff_command4)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 gaff.sh 2>&1 1>/dev/null
!bash gaff.sh >/dev/null 2>&1
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.gaff2
LIG = loadmol2 """ + str(ligand_mol2) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""saveoff LIG """ + str(lib) + "\n"
"""savepdb LIG """ + str(lig_new) + "\n"
"""quit""")
f.close()
tleap_command = "tleap -f " + str(tleap)
cat_command = "cat " + str(starting_end) + " " + str(lig_new) + str(" > ") + str(protein_ligand)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_tleap.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(tleap_command)
print(cat_command)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
ppdb = PandasPdb().read_pdb(protein_ligand)
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['OTHERS'] = [ppdb.df['OTHERS'] != 'OTHERS']
ppdb.to_pdb(path=protein_ligand, records=['ATOM', 'HETATM'], gz=False, append_newline=True)
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.DNA.OL15
source leaprc.RNA.OL3
source leaprc.GLYCAM_06j-1
source leaprc.lipid17
source leaprc.gaff2
source """ + str(water) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""loadoff """ + str(lib) + "\n"
"""SYS = loadpdb """ + str(protein_ligand) + "\n"
"""alignaxes SYS
savepdb SYS """ + str(pdb_nw) + "\n"
"""saveamberparm SYS """ + str(top_nw) + " " + str(crd_nw) + "\n"
"""solvatebox SYS """ + str(water_box) + " " + str(size_box) + """ 0.7
saveamberparm SYS """ + str(top) + " " + str(crd) + "\n"
"""savepdb SYS """ + str(pdb) + "\n"
"""quit""")
f.close()
tleap_command = "tleap -f " + str(tleap)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_tleap.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(tleap_command)
sys.stdout = original_stdout # Reset the standard output to its original value
SYS = os.path.join(workDir, "SYS*")
rm_sys = "rm " + SYS
original_stdout = sys.stdout # Save a reference to the original standard output
with open('rm_sys.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(rm_sys)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 rm_sys.sh 2>&1 1>/dev/null
!bash rm_sys.sh 2> /dev/null
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
!grep "Volume:" leap.log > temp.txt
with open("temp.txt", 'r') as f:
for line in f:
vol = float(line.split()[1])
vol_lit = vol * pow(10, -27)
atom_lit = 9.03 * pow(10, 22)
conc = float(Concentration)
num_ion = int(vol_lit * (conc/0.15) * atom_lit)
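# Ion-count arithmetic: vol (in Angstrom^3) * 1e-27 converts the box volume
# to liters, and 9.03e22 is 0.15 mol/L * 6.022e23 ions/mol; scaling by
# (conc/0.15) therefore yields volume * molarity * Avogadro's number ion
# pairs for the requested concentration.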
if Ions == "NaCl":
pos_neut = "Na+ 0"
pos_num = "Na+ " + str(num_ion)
Cl_num = num_ion
else:
pos_neut = "K+ 0"
pos_num = "K+ " + str(num_ion)
Cl_num = num_ion
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.DNA.OL15
source leaprc.RNA.OL3
source leaprc.GLYCAM_06j-1
source leaprc.lipid17
source leaprc.gaff2
source """ + str(water) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""loadoff """ + str(lib) + "\n"
"""SYS = loadpdb """ + str(protein_ligand) + "\n"
"""alignaxes SYS
check SYS
charge SYS
addions SYS """ + str(pos_neut) + "\n"
"""addions SYS Cl- 0
check SYS
charge SYS
savepdb SYS """ + str(pdb_nw) + "\n"
"""saveamberparm SYS """ + str(top_nw) + " " + str(crd_nw) + "\n"
"""solvatebox SYS """ + str(water_box) + " " + str(size_box) + """ 0.7 """ + "\n"
"""addIonsRand SYS """ + str(pos_num) + """ Cl- """ + str(Cl_num) + "\n"
"""saveamberparm SYS """ + str(top) + " " + str(crd) + "\n"
"""savepdb SYS """ + str(pdb) + "\n"
"""quit""")
f.close()
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
mol = Chem.MolFromPDBFile(lig_new, removeHs=False)
Chem.MolToPDBFile(mol, os.path.join(workDir, "ligand_openFF.pdb"))
in_prmtop = top
in_crd = crd
orig_structure = parmed.amber.AmberParm(in_prmtop, in_crd)
pieces = orig_structure.split()
for piece in pieces:
print(f"There are {len(piece[1])} instance(s) of {piece[0]}")
from openmm.app import PDBFile
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.tests.utils import get_data_file_path
# rdmol = Chem.MolFromMolFile(os.path.join(workDir, "ligand_openFF.sdf"))
# ligand_off_molecule = Molecule.from_rdkit(rdmol, hydrogens_are_explicit=True)
ligand_off_molecule = Molecule.from_smiles(isomer_end)
ligand_pdbfile = PDBFile(os.path.join(workDir, "ligand_openFF.pdb"))
ligand_off_topology = Topology.from_openmm(
ligand_pdbfile.topology,
unique_molecules=[ligand_off_molecule],)
force_field = ForceField("openff_unconstrained-2.0.0.offxml")
ligand_system = force_field.create_openmm_system(ligand_off_topology)
new_ligand_structure = parmed.openmm.load_topology(
ligand_off_topology.to_openmm(),
ligand_system,
xyz=pieces[1][0].positions,)
new_ligand_structure.save(os.path.join(workDir, "ligand.prmtop"), overwrite=True)
new_ligand_structure.save(os.path.join(workDir, "ligand.inpcrd"), overwrite=True)
# Check how many atoms and which order elements are in the new ligand
n_atoms_new = len(new_ligand_structure.atoms)
elements_new = [atom.element for atom in new_ligand_structure.atoms]
# Check how many atoms and which order elements are in the old ligand
old_ligand_structure, n_copies = pieces[1]
n_atoms_old = len(old_ligand_structure.atoms)
elements_old = [atom.element for atom in old_ligand_structure.atoms]
print(
f"There are {n_atoms_old} in the old ligand structure and {n_atoms_new} atoms "
f"in the new ligand structure")
# Print out error message if number of atoms doesn't match
if n_atoms_new != n_atoms_old:
print(
"Error: Number of atoms in input ligand doesn't match number extracted "
"from prmtop file.")
if elements_new != elements_old:
print(
"Error: Elements in input ligand don't match elements in the ligand "
"from the prmtop file.")
print(f"Old elements: {elements_old}")
print(f"New elements: {elements_new}")
# Create a new, empty system
complex_structure = parmed.Structure()
# Add the protein. Convert explicitly to an AmberParm object to ensure that 1-4 scaling factors are preserved.
complex_structure += parmed.amber.AmberParm.from_structure(pieces[0][0])
# Add the ligand
complex_structure += parmed.amber.AmberParm.from_structure(new_ligand_structure)
# Add ions and Waters
ppdb = PandasPdb().read_pdb(pdb)
Cl = [ppdb.df['ATOM']['atom_name'] == 'Cl-']
Na = [ppdb.df['ATOM']['atom_name'] == 'Na+']
K = [ppdb.df['ATOM']['atom_name'] == 'K+']
Cl = np.array(Cl)
Na = np.array(Na)
K = np.array(K)
if True in Cl and True in Na:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
just_ion2_structure = parmed.Structure()
just_ion2_structure += pieces[3][0]
just_ion2_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
complex_structure += parmed.amber.AmberParm.from_structure(just_ion2_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[4][0]
just_water_structure *= len(pieces[4][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Cl and True in K:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
just_ion2_structure = parmed.Structure()
just_ion2_structure += pieces[3][0]
just_ion2_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
complex_structure += parmed.amber.AmberParm.from_structure(just_ion2_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[4][0]
just_water_structure *= len(pieces[4][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Cl:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Na:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in K:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
else:
just_water_structure = parmed.Structure()
just_water_structure += pieces[2][0]
just_water_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
# Copy over the original coordinates and box vectors
complex_structure.coordinates = orig_structure.coordinates
complex_structure.box_vectors = orig_structure.box_vectors
# Export the Structure to AMBER files
top = os.path.join(workDir, "SYS_openff.prmtop")
crd = os.path.join(workDir, "SYS_openff.inpcrd")
complex_structure.save(top, overwrite=True)
complex_structure.save(crd, overwrite=True)
top_openff = os.path.exists(top)
crd_openff = os.path.exists(crd)
if top_openff == True and crd_openff == True:
print("Successfully generated topology! :-)")
else:
print("ERROR: Check your inputs! ")
else:
pdb_amber = os.path.exists(pdb)
top_amber = os.path.exists(top)
crd_amber = os.path.exists(crd)
if pdb_amber == True and top_amber == True and crd_amber == True:
print("Successfully generated topology! :-)")
else:
print("ERROR: Check your inputs! ")
!rm *.sh ANTECHAMBER* ATOMTYPE* temp.txt >/dev/null 2>&1
###Output
_____no_output_____
###Markdown
Let's take a look at our simulation box:
###Code
#@title **Show 3D structure**
import ipywidgets
from ipywidgets import interact, fixed
import warnings
warnings.filterwarnings('ignore')
def show_pdb(show_box=True,
show_ligand=True,
show_sidechains=False,
show_mainchain=False,
color="None"):
def mainchain(p, color="white", model=0):
BB = ['C','O','N','CA']
p.addStyle({"model":model,'atom':BB},
{'stick':{'colorscheme':f"{color}Carbon",'radius':0.3}})
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def ligand(p, model=0):
HP = ['LIG']
p.addStyle({"model":model,'and':[{'resn':HP}]},
{'stick':{'colorscheme':'greenCarbon','radius':0.3}})
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def box(p, model=0):
p.addModelsAsFrames(pdb)
p.addSurface(py3Dmol.SAS, {'opacity': 0.6, 'color':'white'}) #comment this line out if you don't want to see the water box
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def sidechain(p, model=0):
HP = ["ALA","GLY","VAL","ILE","LEU","PHE","MET","PRO","TRP","CYS","TYR"]
BB = ['C','O','N']
p.addStyle({"model":model,'and':[{'resn':HP},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':HP,'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open(pdb,'r').read(),'pdb')
if color == "rainbow":
p.setStyle({'cartoon': {'color':'spectrum'}})
else:
p.setStyle({'cartoon':{}})
if show_sidechains: sidechain(p)
if show_mainchain: mainchain(p)
if show_ligand: ligand(p)
if show_box: box(p)
p.zoomTo()
return p.show()
interact(show_pdb,
show_box=ipywidgets.Checkbox(value=True),
show_ligand=ipywidgets.Checkbox(value=True),
show_sidechains=ipywidgets.Checkbox(value=False),
show_mainchain=ipywidgets.Checkbox(value=False),
color=ipywidgets.Dropdown(options=['None', 'rainbow'], value='None'))
#@title **View and check the Ligand Interaction Network (LigPlot)**
#@markdown This diagram is interactive and allows moving around the residues, as well as clicking the legend to toggle the display of specific residue types or interactions. The diagram will be saved as an HTML file (initial.html).
import MDAnalysis as mda
import prolif as plf
import numpy as np
import os
from prolif.plotting.network import LigNetwork
# load topology
u = mda.Universe(os.path.join(workDir, "SYS_gaff2.prmtop"), pdb)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
fp = plf.Fingerprint()
fp.run(u.trajectory[::1], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="frame", frame=0,
rotation=270)
net.save(os.path.join(workDir, "initial.html"))
net.display()
###Output
_____no_output_____
###Markdown
------ **Equilibrating the simulation box**A proper MD equilibration protocol is designed to equilibrate both temperature and pressure throughout the simulation box while preserving the experimental protein conformation. In addition, we also allow the solvent to accommodate around the protein, creating proper solvation layers.Below, we will set up the MD equilibration parameters, such as temperature, pressure and the desired simulation time. We will define the force constant used to restrain protein heavy atoms in place and the frequency at which we want to save atomic coordinates in a trajectory file (.dcd).After you are done, you can run the next 2 cells to equilibrate your system.
###Code
#@title ### **Parameters for MD Equilibration protocol:**
# remove whitespaces
Jobname = 'prot_lig_equil' #@param {type:"string"}
Ligand_Force_field = "GAFF2" #@param ["GAFF2", "OpenFF 2.0.0 (Sage)"]
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
top = os.path.join(workDir, "SYS_openff.prmtop")
crd = os.path.join(workDir, "SYS_openff.inpcrd")
pdb = os.path.join(workDir, "SYS.pdb")
else:
top = os.path.join(workDir, "SYS_gaff2.prmtop")
crd = os.path.join(workDir, "SYS_gaff2.crd")
pdb = os.path.join(workDir, "SYS.pdb")
Minimization_steps = "20000" #@param ["1000", "5000", "10000", "20000", "50000", "100000"]
#@markdown Simulation time (in nanoseconds) and integration time (in femtoseconds):
Time = "5" #@param {type:"string"}
stride_time_eq = Time
Integration_timestep = "2" #@param ["0.5", "1", "2", "3", "4"]
dt_eq = Integration_timestep
#@markdown Temperature (in Kelvin) and Pressure (in bar)
Temperature = 298 #@param {type:"string"}
temperature_eq = Temperature
Pressure = 1 #@param {type:"string"}
pressure_eq = Pressure
#@markdown Position restraints force constant (in kJ/mol):
Force_constant = 700 #@param {type:"slider", min:0, max:2000, step:100}
#@markdown Frequency to write the trajectory file (in picoseconds):
Write_the_trajectory = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_trajectory_eq = Write_the_trajectory
#@markdown Frequency to write the log file (in picoseconds):
Write_the_log = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_log_eq = Write_the_log
#@markdown ---
#@title **Runs an Equilibration MD simulation (NPT ensemble)**
#@markdown Now, let's equilibrate our system!
###########################################
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
import pytraj as pt
from sys import stdout, exit, stderr
import os, math, fnmatch
#############################################
# Defining MD simulation parameters
jobname = os.path.join(workDir, Jobname)
coordinatefile = crd
pdbfile = pdb
topologyfile = top
time_ps = float(Time)*1000
simulation_time = float(time_ps)*picosecond # in ps
dt = int(dt_eq)*femtosecond
temperature = float(temperature_eq)*kelvin
savcrd_freq = int(write_the_trajectory_eq)*picosecond
print_freq = int(write_the_log_eq)*picosecond
pressure = float(pressure_eq)*bar
restraint_fc = int(Force_constant) # kJ/mol
nsteps = int(simulation_time.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nprint = int(print_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nsavcrd = int(savcrd_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
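# Example: Time = 5 ns with dt = 2 fs gives nsteps = 2,500,000, and saving
# coordinates every 10 ps gives nsavcrd = 5,000 steps between frames.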
#############################################
# Defining functions to use below:
def backup_old_log(pattern, string):
result = []
for root, dirs, files in os.walk("./"):
for name in files:
if fnmatch.fnmatch(name, pattern):
try:
number = int(name[-2])
avail = isinstance(number, int)
#print(name,avail)
if avail == True:
result.append(number)
except:
pass
if len(result) > 0:
maxnumber = max(result)
else:
maxnumber = 0
backup_file = "\#" + string + "." + str(maxnumber + 1) + "#"
os.system("mv " + string + " " + backup_file)
return backup_file
def restraints(system, crd, fc, restraint_array):
boxlx = system.getDefaultPeriodicBoxVectors()[0][0].value_in_unit(nanometers)
boxly = system.getDefaultPeriodicBoxVectors()[1][1].value_in_unit(nanometers)
boxlz = system.getDefaultPeriodicBoxVectors()[2][2].value_in_unit(nanometers)
if fc > 0:
# positional restraints for all heavy-atoms
posresPROT = CustomExternalForce('k*periodicdistance(x, y, z, x0, y0, z0)^2;')
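# Harmonic positional restraint: k (in kJ/mol/nm^2) times the squared
# distance from the reference position (x0, y0, z0); periodicdistance()
# uses the minimum-image convention, so the restraint respects the box.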
posresPROT.addPerParticleParameter('k')
posresPROT.addPerParticleParameter('x0')
posresPROT.addPerParticleParameter('y0')
posresPROT.addPerParticleParameter('z0')
for atom1 in restraint_array:
atom1 = int(atom1)
xpos = crd.positions[atom1].value_in_unit(nanometers)[0]
ypos = crd.positions[atom1].value_in_unit(nanometers)[1]
zpos = crd.positions[atom1].value_in_unit(nanometers)[2]
posresPROT.addParticle(atom1, [fc, xpos, ypos, zpos])
system.addForce(posresPROT)
return system
##############################################
#############################################
print("\n> Simulation details:\n")
print("\tJob name = " + jobname)
print("\tCoordinate file = " + str(coordinatefile))
print("\tPDB file = " + str(pdbfile))
print("\tTopology file = " + str(topologyfile))
print("\n\tSimulation_time = " + str(simulation_time))
print("\tIntegration timestep = " + str(dt))
print("\tTotal number of steps = " + str(nsteps))
print("\n\tSave coordinates each " + str(savcrd_freq))
print("\tPrint in log file each " + str(print_freq))
print("\n\tTemperature = " + str(temperature))
print("\tPressure = " + str(pressure))
#############################################
print("\n> Setting the system:\n")
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
print("\t- Reading topology and structure file...")
prmtop = pmd.load_file(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = complex_structure.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
else:
print("\t- Reading topology and structure file...")
prmtop = AmberPrmtopFile(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = prmtop.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
print("\t- Applying restraints. Force Constant = " + str(Force_constant) + "kJ/mol")
pt_system = pt.iterload(coordinatefile, topologyfile)
pt_topology = pt_system.top
restraint_array = pt.select_atoms('!(:H*) & !(:WAT) & !(:Na+) & !(:Cl-) & !(:Mg+) & !(:K+)', pt_topology)
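# The Amber mask above is intended to select the protein/ligand heavy atoms
# to be restrained, excluding hydrogens, water and monovalent ions.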
system = restraints(system, inpcrd, restraint_fc, restraint_array)
print("\t- Setting barostat...")
system.addForce(MonteCarloBarostat(pressure, temperature))
print("\t- Setting integrator...")
integrator = LangevinIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)
print("\t- Energy minimization: " + str(Minimization_steps) + " steps")
simulation.minimizeEnergy(tolerance=10*kilojoule/mole, maxIterations=int(Minimization_steps))
print("\t-> Potential Energy = " + str(simulation.context.getState(getEnergy=True).getPotentialEnergy()))
print("\t- Setting initial velocities...")
simulation.context.setVelocitiesToTemperature(temperature)
#############################################
# Running Equilibration on NPT ensemble
dcd_file = jobname + ".dcd"
log_file = jobname + ".log"
rst_file = jobname + ".rst"
prv_rst_file = jobname + ".rst"
pdb_file = jobname + ".pdb"
# Creating a trajectory file and reporters
dcd = DCDReporter(dcd_file, nsavcrd)
firstdcdstep = (nsteps) + nsavcrd
dcd._dcd = DCDFile(dcd._out, simulation.topology, simulation.integrator.getStepSize(), firstdcdstep, nsavcrd) # charmm doesn't like first step to be 0
simulation.reporters.append(dcd)
simulation.reporters.append(StateDataReporter(stdout, nprint, step=True, speed=True, progress=True, totalSteps=nsteps, remainingTime=True, separator='\t\t'))
simulation.reporters.append(StateDataReporter(log_file, nprint, step=True, kineticEnergy=True, potentialEnergy=True, totalEnergy=True, temperature=True, volume=True, speed=True))
print("\n> Simulating " + str(nsteps) + " steps...")
simulation.step(nsteps)
simulation.reporters.clear() # remove all reporters so the next iteration doesn't trigger them.
##################################
# Writing last frame information of stride
print("\n> Writing state file (" + str(rst_file) + ")...")
state = simulation.context.getState( getPositions=True, getVelocities=True )
with open(rst_file, 'w') as f:
f.write(XmlSerializer.serialize(state))
last_frame = int(nsteps/nsavcrd)
print("> Writing coordinate file (" + str(pdb_file) + ", frame = " + str(last_frame) + ")...")
positions = simulation.context.getState(getPositions=True).getPositions()
PDBFile.writeFile(simulation.topology, positions, open(pdb_file, 'w'))
print("\n> Finished!\n")
###Output
_____no_output_____
###Markdown
------ **Running a Production MD simulation**Finally, we will proceed with the production simulation itself, using the equilibrated system coordinates as the input structure.Note that we will use a *.rst state file* here, which contains the atomic velocities and positions from the last frame of the equilibration simulation, guaranteeing that our production simulation begins from a thermodynamically equilibrated system.Two other important settings here are the **Number_of_strides** and the **Stride_Time**. In this notebook, we simulate a defined number of *strides*, so the **simulation time = Number_of_strides*Stride_Time**. For example, we can simulate 100 ns by setting *Number_of_strides=10* and *Stride_Time=10 ns*.**Important: at the end of the production simulation, we concatenate all strides to create a complete trajectory file which can be visualized and analyzed.**The idea behind this approach is to make use of the intermittent 12h/24h windows during which Google Colab allows us to use its GPUs.
###Code
#@markdown ### **Provide input file names below:**
Equilibrated_PDB = 'prot_lig_equil.pdb' #@param {type:"string"}
State_file = 'prot_lig_equil.rst' #@param {type:"string"}
#@markdown ---
#@markdown ### **Parameters for MD Production protocol:**
# remove whitespaces
Jobname = 'prot_lig_prod' #@param {type:"string"}
Ligand_Force_field = "GAFF2" #@param ["GAFF2", "OpenFF 2.0.0 (Sage)"]
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
top = os.path.join(workDir, "SYS_openff.prmtop")
crd = os.path.join(workDir, "SYS_openff.inpcrd")
pdb = os.path.join(workDir, "SYS.pdb")
else:
top = os.path.join(workDir, "SYS_gaff2.prmtop")
crd = os.path.join(workDir, "SYS_gaff2.crd")
pdb = os.path.join(workDir, "SYS.pdb")
#@markdown Simulation time (in nanoseconds), number of strides (integers) and integration timestep (in femtoseconds):
Stride_Time = "10" #@param {type:"string"}
stride_time_prod = Stride_Time
Number_of_strides = "1" #@param {type:"string"}
nstride = Number_of_strides
Integration_timestep = "2" #@param ["0.5", "1", "2", "3", "4"]
dt_prod = Integration_timestep
#@markdown Temperature (in Kelvin) and Pressure (in bar)
Temperature = 298 #@param {type:"string"}
temperature_prod = Temperature
Pressure = 1 #@param {type:"string"}
pressure_prod = Pressure
#@markdown Frequency to write the trajectory file (in picoseconds):
Write_the_trajectory = "100" #@param ["10", "100", "200", "500", "1000"]
write_the_trajectory_prod = Write_the_trajectory
#@markdown Frequency to write the log file (in picoseconds):
Write_the_log = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_log_prod = Write_the_log
#@markdown ---
#@title **Runs a Production MD simulation (NPT ensemble) after equilibration**
#
###########################################
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
from sys import stdout, exit, stderr
import os, math, fnmatch
#############################################
# Defining MD simulation parameters
jobname = os.path.join(workDir, str(Jobname))
coordinatefile = crd
pdbfile = os.path.join(workDir, Equilibrated_PDB)
topologyfile = top
equil_rst_file = os.path.join(workDir, State_file)
stride_time_ps = float(stride_time_prod)*1000
stride_time = float(stride_time_ps)*picosecond
nstride = int(Number_of_strides)
dt = int(dt_prod)*femtosecond
temperature = float(temperature_prod)*kelvin
savcrd_freq = int(write_the_trajectory_prod)*picosecond
print_freq = int(write_the_log_prod)*picosecond
pressure = float(pressure_prod)*bar
simulation_time = stride_time*nstride
nsteps = int(stride_time.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nprint = int(print_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nsavcrd = int(savcrd_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
firststride = 1 # must be integer
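# Stride/restart logic (see the loop below): each stride writes its own
# .rst state file, and strides whose .rst already exists are skipped, so a
# preempted Colab session can resume from the last completed stride.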
#############################################
# Defining functions to use below:
def backup_old_log(pattern, string):
result = []
for root, dirs, files in os.walk("./"):
for name in files:
if fnmatch.fnmatch(name, pattern):
try:
number = int(name[-2])
avail = isinstance(number, int)
#print(name,avail)
if avail == True:
result.append(number)
except:
pass
if len(result) > 0:
maxnumber = max(result)
else:
maxnumber = 0
backup_file = "\#" + string + "." + str(maxnumber + 1) + "#"
os.system("mv " + string + " " + backup_file)
return backup_file
##############################################
#############################################
print("\n> Simulation details:\n")
print("\tJob name = " + jobname)
print("\tCoordinate file = " + str(coordinatefile))
print("\tPDB file = " + str(pdbfile))
print("\tTopology file = " + str(topologyfile))
print("\n\tSimulation_time = " + str(stride_time*nstride))
print("\tIntegration timestep = " + str(dt))
print("\tTotal number of steps = " + str(nsteps*nstride))
print("\tNumber of strides = " + str(nstride) + " (" + str(stride_time) + " in each stride)")
print("\n\tSave coordinates each " + str(savcrd_freq))
print("\tSave checkpoint each " + str(savcrd_freq))
print("\tPrint in log file each " + str(print_freq))
print("\n\tTemperature = " + str(temperature))
print("\tPressure = " + str(pressure))
#############################################
print("\n> Setting the system:\n")
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
print("\t- Reading topology and structure file...")
prmtop = pmd.load_file(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = complex_structure.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
else:
print("\t- Reading topology and structure file...")
prmtop = AmberPrmtopFile(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = prmtop.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
print("\t- Setting barostat...")
system.addForce(MonteCarloBarostat(pressure, temperature))
print("\t- Setting integrator...")
integrator = LangevinIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)
#############################################
# Opening a loop of extension NSTRIDE to simulate the entire STRIDE_TIME*NSTRIDE
for n in range(1, nstride + 1):
print("\n\n>>> Simulating Stride #" + str(n) + " <<<")
dcd_file = jobname + "_" + str(n) + ".dcd"
log_file = jobname + "_" + str(n) + ".log"
rst_file = jobname + "_" + str(n) + ".rst"
prv_rst_file = jobname + "_" + str(n-1) + ".rst"
pdb_file = jobname + "_" + str(n) + ".pdb"
if os.path.exists(rst_file):
print("> Stride #" + str(n) + " finished (" + rst_file + " present). Moving to next stride... <")
continue
if n == 1:
print("\n> Loading previous state from equilibration > " + equil_rst_file + " <")
with open(equil_rst_file, 'r') as f:
simulation.context.setState(XmlSerializer.deserialize(f.read()))
currstep = int((n-1)*nsteps)
currtime = currstep*dt.in_units_of(picosecond)
simulation.currentStep = currstep
simulation.context.setTime(currtime)
print("> Current time: " + str(currtime) + " (Step = " + str(currstep) + ")")
else:
print("> Loading previous state from > " + prv_rst_file + " <")
with open(prv_rst_file, 'r') as f:
simulation.context.setState(XmlSerializer.deserialize(f.read()))
currstep = int((n-1)*nsteps)
currtime = currstep*dt.in_units_of(picosecond)
simulation.currentStep = currstep
simulation.context.setTime(currtime)
print("> Current time: " + str(currtime) + " (Step = " + str(currstep) + ")")
dcd = DCDReporter(dcd_file, nsavcrd)
firstdcdstep = (currstep) + nsavcrd
dcd._dcd = DCDFile(dcd._out, simulation.topology, simulation.integrator.getStepSize(), firstdcdstep, nsavcrd) # first step should not be 0
simulation.reporters.append(dcd)
simulation.reporters.append(StateDataReporter(stdout, nprint, step=True, speed=True, progress=True, totalSteps=(nsteps*nstride), remainingTime=True, separator='\t\t'))
simulation.reporters.append(StateDataReporter(log_file, nprint, step=True, kineticEnergy=True, potentialEnergy=True, totalEnergy=True, temperature=True, volume=True, speed=True))
print("\n> Simulating " + str(nsteps) + " steps... (Stride #" + str(n) + ")")
simulation.step(nsteps)
simulation.reporters.clear() # remove all reporters so the next iteration doesn't trigger them.
##################################
# Writing last frame information of stride
print("\n> Writing state file (" + str(rst_file) + ")...")
state = simulation.context.getState( getPositions=True, getVelocities=True )
with open(rst_file, 'w') as f:
f.write(XmlSerializer.serialize(state))
last_frame = int(nsteps/nsavcrd)
print("> Writing coordinate file (" + str(pdb_file) + ", frame = " + str(last_frame) + ")...")
positions = simulation.context.getState(getPositions=True).getPositions()
PDBFile.writeFile(simulation.topology, positions, open(pdb_file, 'w'))
print("\n> Finished!\n")
#@title **Concatenate and align the trajectory**
Skip = "1" #@param ["1", "2", "5", "10", "20", "50"]
stride_traj = Skip
Output_format = "dcd" #@param ["dcd", "pdb", "trr", "xtc"]
#@markdown **Attention:** A large number of frames can exhaust the memory on Colab. You should be fine with 5000 frames or less.
simulation_time_analysis = stride_time_ps*nstride
simulation_ns = float(Stride_Time)*int(Number_of_strides)
number_frames = int(simulation_time_analysis)/int(Write_the_trajectory)
number_frames_analysis = number_frames/int(stride_traj)
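# Example: 1 stride of 10 ns saved every 100 ps yields 100 frames; with
# Skip = 1, all 100 frames are kept for analysis.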
traj_end = os.path.join(workDir, str(Jobname) + "_all.dcd")
traj_end2 = os.path.join(workDir, str(Jobname) + "_all." + str(Output_format))
template = os.path.join(workDir, str(Jobname) + '_%s.dcd')
flist = [template % str(i) for i in range(1, nstride + 1)]
#print(flist)
trajlist = pt.load(flist, pdb, stride=int(stride_traj))
traj_image = trajlist.iterframe(autoimage=True, rmsfit=0)
traj_write = pt.write_traj(traj_end, traj_image, overwrite=True)
traj_load = pt.load(traj_end, pdb)
traj_align = pt.align(traj_load, mask="@CA", ref=0)
traj_write = pt.write_traj(traj_end, traj_align, overwrite=True, options='dcd')
traj_write = pt.write_traj(traj_end2, traj_align, overwrite=True, options=Output_format)
traj_load = pt.load(traj_end, os.path.join(workDir, "SYS_gaff2.prmtop"))
print(traj_load)
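# Optional sanity check (added for illustration, not part of the original
# workflow): the loaded frame count should match the estimate above.
print("Frames loaded:", traj_load.n_frames, "| expected ~", int(number_frames_analysis))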
traj_end_check = os.path.exists(traj_end2)
if traj_end_check == True:
print("Trajectory concatenated successfully! :-)")
else:
print("ERROR: Check your inputs! ")
#@title **Load, view and check the trajectory**
#@markdown This will take a few minutes. Another coffee would be great. :-)
import warnings
warnings.filterwarnings('ignore')
!rm *.pdb 2> /dev/null
#py3dmol functions
class Atom(dict):
def __init__(self, line):
self["type"] = line[0:6].strip()
self["idx"] = line[6:11].strip()
self["name"] = line[12:16].strip()
self["resname"] = line[17:20].strip()
self["resid"] = int(int(line[22:26]))
self["x"] = float(line[30:38])
self["y"] = float(line[38:46])
self["z"] = float(line[46:54])
self["sym"] = line[76:78].strip()
def __str__(self):
line = list(" " * 80)
line[0:6] = self["type"].ljust(6)
line[6:11] = self["idx"].ljust(5)
line[12:16] = self["name"].ljust(4)
line[17:20] = self["resname"].ljust(3)
line[22:26] = str(self["resid"]).ljust(4)
line[30:38] = str(self["x"]).rjust(8)
line[38:46] = str(self["y"]).rjust(8)
line[46:54] = str(self["z"]).rjust(8)
line[76:78] = self["sym"].rjust(2)
return "".join(line) + "\n"
class Molecule(list):
def __init__(self, file):
for line in file:
if "ATOM" in line or "HETATM" in line:
self.append(Atom(line))
def __str__(self):
outstr = ""
for at in self:
outstr += str(at)
return outstr
if number_frames_analysis > 10:
stride_animation = number_frames_analysis/10
else:
stride_animation = 1
u = mda.Universe(pdb, traj_end)
# Write out frames for animation
protein = u.select_atoms('not (resname WAT)')
i = 0
for ts in u.trajectory[0:len(u.trajectory):int(stride_animation)]:
if i > -1:
with mda.Writer('' + str(i) + '.pdb', protein.n_atoms) as W:
W.write(protein)
i = i + 1
# Load frames as molecules
molecules = []
for i in range(int(len(u.trajectory)/int(stride_animation))):
with open('' + str(i) + '.pdb') as ifile:
molecules.append(Molecule(ifile))
models = ""
for i in range(len(molecules)):
models += "MODEL " + str(i) + "\n"
for j,mol in enumerate(molecules[i]):
models += str(mol)
models += "ENDMDL\n"
#view.addModelsAsFrames(models)
# Animation
view = py3Dmol.view(width=800, height=600)
view.addModelsAsFrames(models)
for i, at in enumerate(molecules[0]):
default = {"cartoon": {'color': 'spectrum'}}
view.setViewStyle({'style':'outline','color':'black','width':0.1})
view.setStyle({'model': -1, 'serial': i+1}, at.get("pymol", default))
HP = ['LIG']
view.setStyle({"model":-1,'and':[{'resn':HP}]},{'stick':{'radius':0.3}})
view.zoomTo()
view.animate({'loop': "forward"})
view.show()
#@title **View and check the Ligand Interaction Network (LigPlot) during MD simulations**
#@markdown This diagram is interactive and allows moving around the residues, as well as clicking the legend to toggle the display of specific residues types or interactions. The diagram will be saved as an HTML file (output.html).
#@markdown **Provide output file names below:**
Output_name = 'Interaction' #@param {type:"string"}
#@markdown The frequency with which an interaction is seen will control the width of the corresponding edge. You can hide the least frequent interactions by using a threshold, i.e. threshold=0.3 will hide interactions that occur in less than 30% of frames.
Threshold = 0.3 #@param {type:"slider", min:0, max:1.0, step:0.1}
import MDAnalysis as mda
import prolif as plf
import numpy as np
import os
from prolif.plotting.network import LigNetwork
# load topology
u = mda.Universe(os.path.join(workDir, "SYS_gaff2.prmtop"), traj_end)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
if number_frames_analysis > 10:
stride_animation = number_frames_analysis/10
else:
stride_animation = 1
fp = plf.Fingerprint()
fp.run(u.trajectory[::int(stride_animation)], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="aggregate", threshold=float(Threshold),
rotation=270)
net.save(os.path.join(workDir, Output_name + ".html"))
net.display()
###Output
_____no_output_____
###Markdown
------ **Analysis**Although visualizing your trajectory can be quite useful, sometimes you also want more quantitative data. Analyses of MD trajectories vary a lot and we do not intend to cover them all here. However, one can make use of MDAnalysis or PyTraj to easily analyze simulations. Below, you can find a few examples of code snippets that can help you shed some light on your simulation behavior.
###Code
#@title **MM-PBSA method to calculate the binding free energy**
#@markdown **Important:** We will now calculate the interaction energy and solvation free energy for the complex, receptor and ligand, and average the results over the trajectory frames to obtain an estimate of the binding free energy. Please note that we will not compute the entropy contribution to binding, so strictly speaking our result will not be a true free energy, but it can be used to compare against similar systems. We will carry out the binding energy calculation using both the MM-GBSA and MM-PBSA methods for comparison.
#@markdown Select the GB/SA input parameters, the "OBC" models (igb=2 and 5) are newer, but appear to give significant improvements and are recommended for most projects (For more information check the Section 4.1 of the [Amber Manual](https://ambermd.org/doc12/Amber20.pdf)):
igb = "2" #@param ["0", "1", "2", "5", "6", "7", "8", "10"]
Salt_concentration = '0.15' #@param {type:"string"}
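# Per-frame estimate computed below (entropy term neglected, as noted above):
#   dG_bind ~ <G(complex) - G(receptor) - G(ligand)>, with G = E_MM + G_solv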
fold_MMPBSA = "MMPBSA_igb_" + igb
#@markdown **Provide output file names below:**
Output_name = 'FINAL_RESULTS_MMPBSA' #@param {type:"string"}
final_mmpbsa = os.path.join(workDir, Output_name)
if number_frames_analysis > 10:
stride = number_frames_analysis/10
else:
stride = 1
f = open("mmpbsa.in", "w")
f.write("""&general """ + "\n"
""" endframe=""" + str(int(number_frames_analysis)) + """, interval=""" + str(int(stride)) + """, strip_mask=:WAT:Na+:Cl-:Mg+:K+, """ + "\n"
"""/ """ + "\n"
"""&gb """ + "\n"
""" igb=""" + str(igb) + """, saltcon=""" + str(Salt_concentration) + """, """ + "\n"
"""/ """ + "\n"
"""&pb """ + "\n"
""" istrng=""" + str(Salt_concentration) + """, inp=2, radiopt=0, prbrad=1.4, """ + "\n"
"""/""")
f.close()
amberhome = "source /usr/local/amber.sh"
ante_MMPBSA = "ante-MMPBSA.py -p " + os.path.join(workDir, "SYS_gaff2.prmtop") + " -c com.prmtop -r rec.prmtop -l ligand.prmtop -s :WAT:Na+:Cl-:Mg+:K+ -n :LIG --radii mbondi2"
MMPBSA = "MMPBSA.py -O -i mmpbsa.in -o " + str(final_mmpbsa) + ".dat -sp " + os.path.join(workDir, "SYS_gaff2.prmtop") + " -cp com.prmtop -rp rec.prmtop -lp ligand.prmtop -y " + str(traj_end)
mkdir = "mkdir " + os.path.join(workDir, fold_MMPBSA)
mv = "mv _MMPBSA* *.prmtop reference.frc mmpbsa.in " + os.path.join(workDir, fold_MMPBSA)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_MMPBSA.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(amberhome)
print(ante_MMPBSA)
print(MMPBSA)
print(mkdir)
print(mv)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 run_MMPBSA.sh 2>&1 1>/dev/null
!bash run_MMPBSA.sh 2>&1 1>/dev/null
f_mmpbsa = open(final_mmpbsa + '.dat', 'r')
file_contents = f_mmpbsa.read()
print(file_contents)
f_mmpbsa.close()
#@title **Interaction Energy**
#@markdown **Important:** To quantify the strength of the interaction between the ligand and the protein, we will compute the nonbonded interaction energy between these two species. It is important to note that this quantity is NOT a free energy or a binding energy.
#@markdown **Provide output file names below:**
Output_name = 'Interaction_energy' #@param {type:"string"}
pt_topology = traj_load.top
restraint_array = pt.select_atoms('!(:WAT) & !(:Na+) & !(:Cl-) & !(:Mg+) & !(:K+) & !(:LIG)', pt_topology)
first_atom = restraint_array[0]
last_atom = restraint_array[-1]
mask = "LIE :LIG @" + str(first_atom+1) + "-" + str(last_atom+1)
lie = pt.analysis.energy_analysis.lie(traj_load, mask=mask, options='cutvdw 12.0 cutelec 12.0 diel 2.0', dtype='dict')
lie_elec = lie['LIE[EELEC]']
lie_vdw = lie['LIE[EVDW]']
lie_total = lie_elec + lie_vdw
lie_total_mean = mean(lie_total)
lie_total_stdev = stdev(lie_total)
print("Interaction Energy Average = " + str("{:.2f}".format(lie_total_mean)) + " \u00B1 " + str("{:.2f}".format(lie_total_stdev)) + " kcal/mol")
time = len(lie_total)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, lie_total, alpha=0.6, color = 'blue', linewidth = 1.5, label= "Total Energy")
ax = plt.plot(time_array, lie_elec, alpha=0.6, color = 'green', linewidth = 1.5, label= "Electrostatic Energy")
ax = plt.plot(time_array, lie_vdw, alpha=0.6, color = 'red', linewidth = 1.5, label= "van der Waals Energy")
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel('Interaction Energy \n (kcal/mol)', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.legend(frameon=False, loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
lie_eelec = pd.DataFrame(lie['LIE[EELEC]'])
lie_eelec.to_csv(os.path.join(workDir, Output_name + "_eelec.csv"))
lie_evdw = pd.DataFrame(lie['LIE[EVDW]'])
lie_evdw.to_csv(os.path.join(workDir, Output_name + "_evdw.csv"))
#@title **Compute distance between the ligand and catalytic site residues**
#@markdown **Provide output file names below:**
Output_name = 'distance' #@param {type:"string"}
#@markdown **Cutoff distance to nearest residues (Angstrons):**
Distance = '5' #@param {type:"string"}
ini = 0
top = pt_topology
top.set_reference(traj_load[ini])  # set the reference frame once for the distance-based mask below
indices = traj_load.top.select('(:LIG<:' + str(Distance) + ')&!(:WAT|:Na+,Cl-,LIG)')
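# cpptraj distance mask: '(:LIG<:5)' selects whole residues with any atom
# within 5 Angstroms of LIG; the second term excludes water, ions and the
# ligand itself from the selection.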
residues = [res.original_resid for res in top[indices].residues]
res_string = ','.join(str(e) for e in residues)
print("Selected residues = " + res_string + "\n")
mask = ":LIG :" + str(res_string)
dist = pt.distance(traj_load, mask)
dist_mean = mean(dist)
dist_stdev = stdev(dist)
print("Distance Average = " + str("{:.2f}".format(dist_mean)) + " \u00B1 " + str("{:.2f}".format(dist_stdev)) + " Å")
time = len(dist)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, dist, alpha=1, color = 'springgreen', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Distance [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(dist)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute distance between the ligand and specific residues**
#@markdown **Provide output file names below:**
Output_name = 'distance_select' #@param {type:"string"}
#@markdown **Type the number of residues separated by commas and without spaces (1,2,3...):**
Residues = '78,84,85' #@param {type:"string"}
mask = ":LIG :" + str(Residues)
dist = pt.distance(traj_load, mask)
print("Selected residues = " + Residues + "\n")
dist_mean = mean(dist)
dist_stdev = stdev(dist)
print("Distance Average = " + str("{:.2f}".format(dist_mean)) + " \u00B1 " + str("{:.2f}".format(dist_stdev)) + " Å")
time = len(dist)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, dist, alpha=1, color = 'magenta', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Distance [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(dist)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute RMSD of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'rmsd_ca' #@param {type:"string"}
rmsd = pt.rmsd(traj_load, ref = 0, mask = "@CA")
time = len(rmsd)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, rmsd, alpha=0.6, color = 'blue', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("RMSD [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(rmsd)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Plot RMSD as a distribution**
#@markdown **Provide output file names below:**
Output_name = 'rmsd_dist' #@param {type:"string"}
ax = sb.kdeplot(rmsd, color="blue", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('RMSD [$\AA$]', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Compute radius of gyration of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'radius_gyration' #@param {type:"string"}
radgyr = pt.radgyr(traj_load, mask = "@CA")
time = len(rmsd)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
plt.plot(time_array, radgyr, alpha=0.6, color = 'green', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Radius of gyration ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(radgyr)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Plot radius of gyration as a distribution**
#@markdown **Provide output file names below:**
Output_name = 'radius_gyration_dist' #@param {type:"string"}
ax = sb.kdeplot(radgyr, color="green", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('Radius of gyration ($\AA$)', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Compute RMSF of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'rmsf_ca' #@param {type:"string"}
rmsf = pt.rmsf(traj_load, "@CA")
bfactor = pt.bfactors(traj_load, byres=True)
# Plotting:
plt.plot(rmsf[:,1], alpha=1.0, color = 'red', linewidth = 1.0)
plt.xlabel("Residue", fontsize = 14, fontweight = 'bold')
plt.ylabel("RMSF ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.xlim(0, len(rmsf[:-1]))
#plt.xticks(np.arange(min(rmsf[:1]), max(rmsf[:1])))
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(rmsf)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **2D RMSD**
#@markdown **Provide output file names below:**
Output_name = '2D_rmsd' #@param {type:"string"}
last_frame = len(time_array)
stride_ticks_f = (last_frame)/5
ticks_frame = np.arange(0,(len(time_array) + float(stride_ticks_f)), float(stride_ticks_f))
a = ticks_frame.astype(float)
stride_ticks_t = (simulation_ns)/5
tick_time = np.arange(0,(float(simulation_ns) + float(stride_ticks_t)), float(stride_ticks_t))
b = tick_time.astype(float)
mat1 = pt.pairwise_rmsd(traj_load, mask="@CA", frame_indices=range(int(number_frames_analysis)))
ax = plt.imshow(mat1, cmap = 'PRGn', origin='lower', interpolation = 'bicubic')
plt.title('2D RMSD')
plt.xlabel('Time (ns)', fontsize = 14, fontweight = 'bold')
plt.ylabel('Time (ns)', fontsize = 14, fontweight = 'bold')
# plt.xticks(fontsize = 12)
# plt.yticks(fontsize = 12)
plt.xticks(a, b.round(decimals=3), fontsize = 12)
plt.yticks(a, b.round(decimals=3), fontsize = 12)
# plt.xlim(0, a[-1])
# plt.ylim(0, a[-1])
cbar1 = plt.colorbar()
cbar1.set_label("RMSD ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(mat1)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Calculate eigenvectors of Principal Component Analysis (PCA)**
data = pt.pca(traj_load, fit=True, ref=0, mask='@CA', n_vecs=2)
#print('projection values of each frame to first mode = {} \n'.format(data[0][0]))
#print('projection values of each frame to second mode = {} \n'.format(data[0][1]))
#print('eigenvalues of first two modes', data[1][0])
#print("")
#print('eigenvectors of first two modes: \n', data[1][1])
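# Sketch: since data[1][0] holds the eigenvalues of the first two modes (see the
# commented prints above), their ratio indicates how dominant PC1 is over PC2.
eigvals = data[1][0]
print('PC1/PC2 eigenvalue ratio = {:.2f}'.format(eigvals[0] / eigvals[1]))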
last_frame = len(time_array)
stride_ticks_f = (last_frame)/5
ticks_frame = np.arange(0,(len(time_array) + float(stride_ticks_f)), float(stride_ticks_f))
a = ticks_frame.astype(float)
a2 = a.tolist()
stride_ticks_t = (simulation_ns)/5
tick_time = np.arange(0,(float(simulation_ns) + float(stride_ticks_t)), float(stride_ticks_t))
b = tick_time.astype(float)
#@markdown **Provide output file names below:**
Output_name = 'PCA' #@param {type:"string"}
Output_PC1 = 'PC1' #@param {type:"string"}
Output_PC2 = 'PC2' #@param {type:"string"}
%matplotlib inline
%config InlineBackend.figure_format = 'retina' # high resolution
projection_data = data[0]
plt.title(r'PCA of C-$\alpha$')
PC1 = data[0][0]
PC2 = data[0][1]
a = plt.scatter(PC1,PC2, c=range(int(number_frames_analysis)), cmap='Greens', marker='o',s=8, alpha=1)
plt.clim(0, last_frame)
plt.xlabel('PC1', fontsize = 14, fontweight = 'bold')
plt.ylabel('PC2', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
# N = len(number_frames)
# x2 = np.arange(N)
cbar1 = plt.colorbar(a, orientation="vertical")
cbar1.set_label('Time(ns)', fontsize = 14, fontweight = 'bold')
cbar1.set_ticks(a2)
cbar1.set_ticklabels(b.round(decimals=3))
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
pc1=pd.DataFrame(PC1)
pc1.to_csv(os.path.join(workDir, Output_PC1 + ".csv"))
pc2=pd.DataFrame(PC2)
pc2.to_csv(os.path.join(workDir, Output_PC2 + ".csv"))
#@title **Plot Principal Component 1 (PC1) and Principal Component 2 (PC2) as distributions**
Output_name = 'PCA_dist' #@param {type:"string"}
fig = plt.figure(figsize=(9,5))
plt.subplot(1, 2, 1)
ax = sb.kdeplot(PC1, color="green", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('PC1', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.subplot(1, 2, 2)
ax2 = sb.kdeplot(PC2, color="purple", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('PC2', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['bottom'].set_visible(True)
ax2.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Pearson's Cross Correlation (CC)**
#@markdown **Provide output file names below:**
Output_name = 'cross_correlation' #@param {type:"string"}
traj_align = pt.align(traj_load, mask='@CA', ref=0)
mat_cc = matrix.correl(traj_align, '@CA')
ax = plt.imshow(mat_cc, cmap = 'PiYG_r', interpolation = 'bicubic', vmin = -1, vmax = 1, origin='lower')
plt.xlabel('Residues', fontsize = 14, fontweight = 'bold')
plt.ylabel('Residues', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
cbar1 = plt.colorbar()
cbar1.set_label('$CC_{ij}$', fontsize = 14, fontweight = 'bold')
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(mat_cc)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
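# Sketch: list the most anti-correlated CA pairs from mat_cc (one CA per residue here),
# which often hint at concerted, hinge-like motions. Assumes mat_cc is a square matrix.
cc = np.array(mat_cc)
ii, jj = np.tril_indices_from(cc, k=-1) # unique off-diagonal pairs
order = np.argsort(cc[ii, jj]) # most negative correlations first
for k in order[:5]:
    print("residues %d-%d: CC = %.2f" % (ii[k] + 1, jj[k] + 1, cc[ii[k], jj[k]]))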
###Output
_____no_output_____
###Markdown
**Hello there!**

This is a Jupyter notebook for running Molecular Dynamics (MD) simulations using the OpenMM engine and the AMBER force field for **Protein and Ligand** systems. This notebook is supplementary material of the paper "***Making it rain: Cloud-based molecular simulations for everyone***" ([link here](https://doi.org/10.1021/acs.jcim.1c00998)) and we encourage you to read it before using this pipeline.

The main goal of this notebook is to demonstrate how to harness the power of cloud computing to run microsecond-long MD simulations in a cheap yet feasible fashion.

---

**This notebook is NOT a standard protocol for MD simulations!** It is just a simple MD pipeline illustrating each step of a simulation protocol.

---

**Bugs**
- If you encounter any bugs, please report the issue to https://github.com/pablo-arantes/making-it-rain/issues

**Acknowledgments**
- We would like to thank the OpenMM team for developing an excellent and open source engine.
- We would like to thank the ChemosimLab ([@ChemosimLab](https://twitter.com/ChemosimLab)) team for their incredible [ProLIF](https://prolif.readthedocs.io/en/latest/index.html) (Protein-Ligand Interaction Fingerprints) tool.
- A Making-it-rain notebook by **Pablo R. Arantes** ([@pablitoarantes](https://twitter.com/pablitoarantes)), **Marcelo D. Polêto** ([@mdpoleto](https://twitter.com/mdpoleto)), **Conrado Pedebos** ([@ConradoPedebos](https://twitter.com/ConradoPedebos)) and **Rodrigo Ligabue-Braun** ([@ligabue_braun](https://twitter.com/ligabue_braun)).
- Also, credit to [David Koes](https://github.com/dkoes) for his awesome [py3Dmol](https://3dmol.csb.pitt.edu/) plugin.
- For related notebooks see: [Making-it-rain](https://github.com/pablo-arantes/making-it-rain)

**Introduction**

In general, MD simulations rely on 1) a set of atomic coordinates of all atoms in a simulation box and 2) a set of force field parameters that describe the interaction energies between atoms.

In terms of inputs, we will need:
* A .pdb file of the protein and a .pdb file of the ligand containing a set of atomic coordinates.

In this notebook, we will simulate PDB 3HTB. To build our simulation box, we will use the LEaP program (https://ambermd.org/tutorials/pengfei/index.php). LEaP is a portal between many chemical structure file types (.pdb and .mol2, primarily) and the Amber model parameter file types such as .lib, .prepi, parm.dat, and .frcmod. Each of the parameter files contains pieces of information needed for constructing a simulation, whether for energy minimization or molecular dynamics. LEaP functions within a larger workflow described in Section 1.1 of the [Amber Manual](https://ambermd.org/doc12/Amber20.pdf).

To build the ligand topology we will use the general AMBER force field (GAFF - http://ambermd.org/antechamber/gaff.html) and the Open Force Field Toolkit (OpenFF - https://openforcefield.org/). GAFF is compatible with the AMBER force field and has parameters for almost all organic molecules made of C, N, O, H, S, P, F, Cl, Br and I. As a complete force field, GAFF is suitable for studying a great number of molecules in an automated fashion. The Open Force Field Toolkit, built by the [Open Force Field Initiative](https://openforcefield.org/), is a Python toolkit for the development and application of modern molecular mechanics force fields based on direct chemical perception and rigorous statistical parameterization methods.
You can download example input files from [here](https://github.com/pablo-arantes/making-it-rain/tree/main/PROTEIN_LIGAND).

---

**Setting the environment for MD calculation**

Firstly, we need to install all necessary libraries and packages for our simulation. The main packages we will be installing are:

1. Anaconda (https://docs.conda.io/en/latest/miniconda.html)
2. OpenMM (https://openmm.org/)
3. PyTraj (https://amber-md.github.io/pytraj/latest/index.html)
4. py3Dmol (https://pypi.org/project/py3Dmol/)
5. ProLIF (https://github.com/chemosim-lab/ProLIF)
6. Numpy (https://numpy.org/)
7. Matplotlib (https://matplotlib.org/)
8. AmberTools (https://ambermd.org/AmberTools.php)
###Code
#@title **Install dependencies**
#@markdown It will take a few minutes, please, drink a coffee and wait. ;-)
# install dependencies
%%capture
import sys
!pip -q install py3Dmol 2>&1 1>/dev/null
!pip install --upgrade MDAnalysis 2>&1 1>/dev/null
!pip install biopandas 2>&1 1>/dev/null
!pip install rdkit-pypi
!pip install Cython
!git clone https://github.com/chemosim-lab/ProLIF.git
prolif1 = "cd /content/ProLIF"
prolif2 = "sed -i 's/mdanalysis.*/mdanalysis==2.0.0/' setup.cfg"
prolif3 = "pip install ."
original_stdout = sys.stdout # Save a reference to the original standard output
with open('prolif.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(prolif1)
print(prolif2)
print(prolif3)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 prolif.sh 2>&1 1>/dev/null
!bash prolif.sh >/dev/null 2>&1
# install conda
!wget -qnc https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!bash Miniconda3-latest-Linux-x86_64.sh -bfp /usr/local 2>&1 1>/dev/null
!rm -r Miniconda3-latest-Linux-x86_64.sh /content/ProLIF prolif.sh
!conda install -y -q -c conda-forge openmm=7.6 python=3.7 pdbfixer 2>&1 1>/dev/null
!conda install -c conda-forge ambertools --yes 2>&1 1>/dev/null
!conda install -c ambermd pytraj --yes 2>&1 1>/dev/null
!conda install -c conda-forge parmed --yes 2>&1 1>/dev/null
!conda install -c conda-forge openff-toolkit --yes 2>&1 1>/dev/null
!conda install -c bioconda pybel --yes
!conda install -c openbabel openbabel --yes
#load dependencies
sys.path.append('/usr/local/lib/python3.7/site-packages/')
from openmm import app, unit
from openmm.app import HBonds, NoCutoff, PDBFile
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.typing.engines.smirnoff import ForceField
from openff.toolkit.utils import get_data_file_path
import parmed as pmd
from biopandas.pdb import PandasPdb
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
import os
import urllib.request
import numpy as np
import MDAnalysis as mda
import py3Dmol
from __future__ import print_function
import pytraj as pt
import platform
import scipy.cluster.hierarchy
from scipy.spatial.distance import squareform
import scipy.stats as stats
import matplotlib.pyplot as plt
import pandas as pd
from scipy.interpolate import griddata
import seaborn as sb
from statistics import mean, stdev
from pytraj import matrix
from matplotlib import colors
from IPython.display import set_matplotlib_formats
!wget https://raw.githubusercontent.com/openforcefield/openff-forcefields/master/openforcefields/offxml/openff_unconstrained-2.0.0.offxml 2>&1 1>/dev/null
###Output
_____no_output_____
###Markdown
**Using Google Drive to store simulation data**

Google Colab does not allow users to keep data on their computing nodes. However, we can use Google Drive to read, write, and store our simulation files. Therefore, we suggest that you:

1. Create a folder in your own Google Drive and copy the necessary input files there.
2. Copy the path of your created directory. We will use it below.
###Code
#@title ### **Import Google Drive**
#@markdown Click the "Run" button to make your Google Drive accessible.
from google.colab import drive
drive.flush_and_unmount()
drive.mount('/content/drive', force_remount=True)
#@title **Check if you correctly allocated GPU nodes**
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
###Output
_____no_output_____
###Markdown
---

**Loading the necessary input files**

At this point, we should have all libraries and dependencies installed and all necessary input files already in your Google Drive folder.

**Important**: Make sure the PDB file points to the correct path. If necessary, correct the path and re-upload the files. We will merge the receptor and ligand structure objects to form the complex. Note that the coordinates of protein and ligand are determined by the PDB file, and they should be consistent with the ligand being positioned in the binding pocket.

Below, you should provide the names of all input files and the path of the Google Drive folder containing them.
###Code
#@title **Please, provide the necessary input files below**:
#@markdown **Important:** The protonation of your ligand is crucial for the correct parameterization of the molecule.
%%capture
import pybel
import rdkit
import mdtraj as md
from rdkit import Chem
from rdkit.Chem import AllChem,Draw
from rdkit.Chem.Draw import IPythonConsole
from pdbfixer import PDBFixer
Protein_PDB_file_name = 'protein.pdb' #@param {type:"string"}
Ligand_PDB_file_name = 'ligand.pdb' #@param {type:"string"}
Add_ligand_hydrogens = "Yes" #@param ["Yes", "No"]
ligand_name = Ligand_PDB_file_name
Google_Drive_Path = '/content/drive/MyDrive/' #@param {type:"string"}
workDir = Google_Drive_Path
file_name = os.path.join(workDir, str(Protein_PDB_file_name))
initial_pdb = os.path.join(workDir, "starting0.pdb")
ligand_pdb = os.path.join(workDir, str(ligand_name))
ligand_pdb2 = os.path.join(workDir, "ligand_H.pdb")
starting = os.path.join(workDir, "starting1.pdb")
starting2 = os.path.join(workDir, "starting2.pdb")
starting_end = os.path.join(workDir, "starting_end.pdb")
#Add hydrogens in the ligand
if Add_ligand_hydrogens == "Yes":
fixer = PDBFixer(filename=ligand_pdb)
PDBFile.writeFile(fixer.topology, fixer.positions, open("temp.pdb", 'w'))
ppdb = PandasPdb().read_pdb("temp.pdb")
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['HETATM']= ppdb.df['HETATM'][ppdb.df['HETATM']['element_symbol'] != 'H']
ppdb.to_pdb(path="temp.pdb", records=['ATOM', 'HETATM'], gz=False, append_newline=True)
mol= [m for m in pybel.readfile(filename="temp.pdb", format='pdb')][0]
mol.calccharges
mol.addh()
out=pybel.Outputfile(filename="temp2.pdb",format='pdb',overwrite=True)
out.write(mol)
out.close()
md.load("temp2.pdb").save("temp2.pdb")
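# The block below removes hydrogens that Open Babel may have attached to halogen atoms
# (Cl, F, Br, I), which should not carry explicit hydrogens, by tracking the halogens'
# atom ids and their CONECT records in the intermediate PDB.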
halogens = ['Cl', 'F', 'Br', 'I']
atom_id = []
H_id = []
with open("temp2.pdb") as f:
for line in f:
data = line.split()
if data[0] == "ATOM":
if data[2] in halogens:
atom_id.append(data[1])
if data[0] == "CONECT":
if data[1] in atom_id:
if len(data) > 3:
H_id.append(data[3])
H_id.append(data[4])
H_id.append(data[5])
with open(ligand_pdb2, 'w') as h:
with open("temp2.pdb") as f:
for line in f:
data = line.split()
if data[0] == "ATOM":
if data[1] not in H_id:
print(line, file=h)
elif data[0] == "CONECT":
if data[1] not in atom_id:
print(line, file=h)
else:
print(line, file=h)
fixer = PDBFixer(filename=ligand_pdb2)
PDBFile.writeFile(fixer.topology, fixer.positions, open(ligand_pdb2, 'w'))
else:
fixer = PDBFixer(filename=ligand_pdb)
PDBFile.writeFile(fixer.topology, fixer.positions, open(ligand_pdb2, 'w'))
#Fix protein
pdb_parm = pmd.load_file(file_name)
pdb_parm.save(initial_pdb, standard_resnames=True, overwrite=True)
ppdb = PandasPdb().read_pdb(initial_pdb)
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['HETATM'] = ppdb.df['HETATM'][ppdb.df['HETATM']['residue_name'] == 'HOH']
ppdb.df['ATOM'] = ppdb.df['ATOM'][ppdb.df['ATOM']['atom_name'] != 'OXT']
ppdb.df['ATOM']= ppdb.df['ATOM'][ppdb.df['ATOM']['element_symbol'] != 'H']
ppdb.to_pdb(path=starting, records=['ATOM', 'HETATM'], gz=False, append_newline=True)
from Bio.PDB import is_aa
from Bio.PDB import PDBParser, PDBIO, Select
class ProtSelect(Select):
def accept_residue(self, residue):
print(f"{residue} -> {is_aa(residue)}")
return is_aa(residue, standard=True)
from Bio import PDB
pdb_ini = PDBParser().get_structure("pdb", starting)
io = PDBIO()
io.set_structure(pdb_ini)
io.save(starting2, ProtSelect());
pdb4amber_cmd = "pdb4amber -i " + str(starting2) + " -o " + str(starting_end) + " -p"
original_stdout = sys.stdout # Save a reference to the original standard output
with open('pdb4amber.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(pdb4amber_cmd)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 pdb4amber.sh 2>&1 1>/dev/null
!bash pdb4amber.sh 2> /dev/null
!rm pdb4amber.sh temp.pdb temp2.pdb
#@markdown ---
import rdkit
from rdkit import Chem
from rdkit.Chem import AllChem,Draw
from rdkit.Chem.Draw import IPythonConsole
#@title **Enumerate Stereoisomers to generate ligand topology:**
##@markdown **You can find the SMILES for your ligand at: https://pubchem.ncbi.nlm.nih.gov/**
mol= [m for m in pybel.readfile(filename=ligand_pdb2, format='pdb')][0]
mol.calccharges
mol.addh()
out=pybel.Outputfile(filename="temp2.smi",format='smiles',overwrite=True)
out.write(mol)
out.close()
fileObj = open("temp2.smi", "r",) #opens the file in read mode
for aRow in fileObj:
smi = aRow.split('\t')
fileObj.close()
Ligand_smiles = smi[0]
!rm temp2.smi >/dev/null 2>&1
mol = Chem.MolFromSmiles(Ligand_smiles)
def spam(n):
out=[]
for perm in getPerms(n):
elem = [ int(i) for i in list(perm) ]
out.append(elem)
return out
def getPerms(n):
from itertools import permutations
for i in getCandidates(n):
for perm in set(permutations(i)):
yield ''.join(perm)
def getCandidates(n):
for i in range(0, n+1):
res = "1" * i + "0" * (n - i)
yield res
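# Together, spam/getPerms/getCandidates enumerate every n-bit string (0 = CW, 1 = CCW),
# so GetStereoIsomers below can flip each chiral centre and generate all 2^n stereoisomers.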
def GetStereoIsomers(mol):
from rdkit import Chem
from copy import copy
out = []
chiralCentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
#return the molecule object when no chiral centres were identified
if chiralCentres == []:
return [mol]
#All bit permutations with number of bits equals number of chiralCentres
elements = spam(len(chiralCentres))
!rm smiles.txt temp2.smi >/dev/null 2>&1
for isoId,element in enumerate(elements):
for centreId,i in enumerate(element):
atomId = chiralCentres[centreId][0]
if i == 0:
mol.GetAtomWithIdx(atomId).SetChiralTag(Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CW)
elif i == 1:
mol.GetAtomWithIdx(atomId).SetChiralTag(Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CCW)
outmol = copy(mol)
out.append(outmol)
print(Chem.MolToSmiles(mol,isomericSmiles=True), file=open("smiles.txt", "a",))
return out
Draw.MolsToGridImage(GetStereoIsomers(mol), subImgSize=(500,200), molsPerRow=1)
chiralCentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
if chiralCentres != []:
print("Follow the stereoisomers for your ligand: \n")
fileObj = open("smiles.txt", "r",) #opens the file in read mode
smiles = fileObj.read().splitlines() #puts the file into an array
fileObj.close()
x = len(smiles[:-1])
for a in range(x+1):
y = smiles[0+a:(a+1)]
globals()[f"isomer{a+1}"] = str(y[0])
print("Isomer " + str(a+1) + " = " + str(y[0]) + "\n")
else:
isomer1 = Ligand_smiles
print("No chiral centres were identified! \nIsomer 1 = " + str(isomer1) )
Draw.MolsToGridImage(GetStereoIsomers(mol), subImgSize=(700,200), molsPerRow=1, returnPNG=True)
from rdkit import Chem
from rdkit.Chem import PandasTools
from openff.toolkit.typing.engines.smirnoff import ForceField
import parmed
#@title **Parameters to generate the topology:**
#@markdown **Parameters to generate the protein topology:**
Force_field = "ff19SB" #@param ["ff19SB", "ff14SB"]
if Force_field == "ff19SB":
ff = "leaprc.protein.ff19SB"
else:
ff = "leaprc.protein.ff14SB"
Water_type = "OPC" #@param ["TIP3P", "OPC"]
if Water_type == "TIP3P":
water = "leaprc.water.tip3p"
water_box = "TIP3PBOX"
else:
water = "leaprc.water.opc"
water_box = "OPCBOX"
#@markdown Box size (Angstroms):
Size_box = 12 #@param {type:"slider", min:10, max:20, step:1}
size_box = Size_box
#@markdown **ATTENTION**: Give the concentration in molar units; AMBER tleap will neutralize your system automatically:
Ions = "NaCl" #@param ["NaCl", "KCl" ]
Concentration = "0.15" #@param {type:"string"}
#@markdown **Parameters to generate the ligand topology:**
Ligand_Force_field = "GAFF2" #@param ["GAFF2", "OpenFF 2.0.0 (Sage)"]
Ligand_isomer = "1" #@param {type:"string", min:1, max:10, step:100}
if chiralCentres == []:
isomer_end = isomer1
else:
isomer_end = globals()[f"isomer{Ligand_isomer}"]
Ligand_net_charges = "0" #@param {type:"string", min:-10, max:10, step:1}
#@markdown ---
tleap = os.path.join(workDir, "tleap.in")
top_nw = os.path.join(workDir, "SYS_nw.prmtop")
crd_nw = os.path.join(workDir, "SYS_nw.crd")
pdb_nw = os.path.join(workDir, "SYS_nw.pdb")
top = os.path.join(workDir, "SYS_gaff2.prmtop")
crd = os.path.join(workDir, "SYS_gaff2.crd")
pdb = os.path.join(workDir, "SYS.pdb")
ligand_noh = os.path.join(workDir, "ligand_noh.pdb")
ligand_h = os.path.join(workDir, "ligand_h.pdb")
ligand_mol2 = os.path.join(workDir, "ligand.mol2")
ligand_frcmod = os.path.join(workDir, "ligand.frcmod")
lig_new = os.path.join(workDir, "ligand_gaff.pdb")
protein_ligand = os.path.join(workDir, "protein_ligand.pdb")
lib = os.path.join(workDir, "lig.lib")
gaff_command1 = "pdb4amber -i " + str(ligand_pdb2) + " -o " + str(ligand_h)
gaff_command3 = "antechamber -i " + str(ligand_h) + " -fi pdb -o " + str(ligand_mol2) + " -fo mol2 -c bcc -nc " + str(Ligand_net_charges) + " -rn LIG -at gaff2"
gaff_command4 = "parmchk2 -i " + str(ligand_mol2) + " -f mol2 -o " + str(ligand_frcmod) + " -s gaff2"
original_stdout = sys.stdout # Save a reference to the original standard output
with open('gaff.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(gaff_command1)
print(gaff_command3)
print(gaff_command4)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 gaff.sh 2>&1 1>/dev/null
!bash gaff.sh >/dev/null 2>&1
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.gaff2
LIG = loadmol2 """ + str(ligand_mol2) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""saveoff LIG """ + str(lib) + "\n"
"""savepdb LIG """ + str(lig_new) + "\n"
"""quit""")
f.close()
tleap_command = "tleap -f " + str(tleap)
cat_command = "cat " + str(starting_end) + " " + str(lig_new) + str(" > ") + str(protein_ligand)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_tleap.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(tleap_command)
print(cat_command)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
ppdb = PandasPdb().read_pdb(protein_ligand)
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['OTHERS'] = [ppdb.df['OTHERS'] != 'OTHERS']
ppdb.to_pdb(path=protein_ligand, records=['ATOM', 'HETATM'], gz=False, append_newline=True)
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.DNA.OL15
source leaprc.RNA.OL3
source leaprc.GLYCAM_06j-1
source leaprc.lipid17
source leaprc.gaff2
source """ + str(water) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""loadoff """ + str(lib) + "\n"
"""SYS = loadpdb """ + str(protein_ligand) + "\n"
"""alignaxes SYS
savepdb SYS """ + str(pdb_nw) + "\n"
"""saveamberparm SYS """ + str(top_nw) + " " + str(crd_nw) + "\n"
"""solvatebox SYS """ + str(water_box) + " " + str(size_box) + """ 0.7
saveamberparm SYS """ + str(top) + " " + str(crd) + "\n"
"""savepdb SYS """ + str(pdb) + "\n"
"""quit""")
f.close()
tleap_command = "tleap -f " + str(tleap)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_tleap.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(tleap_command)
sys.stdout = original_stdout # Reset the standard output to its original value
SYS = os.path.join(workDir, "SYS*")
rm_sys = "rm " + SYS
original_stdout = sys.stdout # Save a reference to the original standard output
with open('rm_sys.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(rm_sys)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 rm_sys.sh 2>&1 1>/dev/null
!bash rm_sys.sh 2> /dev/null
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
!grep "Volume:" leap.log > temp.txt
with open("temp.txt", 'r') as f:
for line in f:
vol = float(line.split()[1])
vol_lit = vol * pow(10, -27) # box volume: A^3 -> liters (1 A^3 = 1e-27 L)
atom_lit = 9.03 * pow(10, 22) # ion pairs per liter at 0.15 M (0.15 mol/L x Avogadro's number)
conc = float(Concentration)
num_ion = int(vol_lit * (conc/0.15) * atom_lit) # ion pairs needed for the requested concentration
if Ions == "NaCl":
pos_neut = "Na+ 0"
pos_num = "Na+ " + str(num_ion)
Cl_num = num_ion
else:
pos_neut = "K+ 0"
pos_num = "K+ " + str(num_ion)
Cl_num = num_ion
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.DNA.OL15
source leaprc.RNA.OL3
source leaprc.GLYCAM_06j-1
source leaprc.lipid17
source leaprc.gaff2
source """ + str(water) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""loadoff """ + str(lib) + "\n"
"""SYS = loadpdb """ + str(protein_ligand) + "\n"
"""alignaxes SYS
check SYS
charge SYS
addions SYS """ + str(pos_neut) + "\n"
"""addions SYS Cl- 0
check SYS
charge SYS
savepdb SYS """ + str(pdb_nw) + "\n"
"""saveamberparm SYS """ + str(top_nw) + " " + str(crd_nw) + "\n"
"""solvatebox SYS """ + str(water_box) + " " + str(size_box) + """ 0.7 """ + "\n"
"""addIonsRand SYS """ + str(pos_num) + """ Cl- """ + str(Cl_num) + "\n"
"""saveamberparm SYS """ + str(top) + " " + str(crd) + "\n"
"""savepdb SYS """ + str(pdb) + "\n"
"""quit""")
f.close()
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
mol = Chem.MolFromPDBFile(lig_new, removeHs=False)
Chem.MolToPDBFile(mol, os.path.join(workDir, "ligand_openFF.pdb"))
in_prmtop = top
in_crd = crd
orig_structure = parmed.amber.AmberParm(in_prmtop, in_crd)
pieces = orig_structure.split()
for piece in pieces:
print(f"There are {len(piece[1])} instance(s) of {piece[0]}")
from openmm.app import PDBFile
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.tests.utils import get_data_file_path
# rdmol = Chem.MolFromMolFile(os.path.join(workDir, "ligand_openFF.sdf"))
# ligand_off_molecule = Molecule.from_rdkit(rdmol, hydrogens_are_explicit=True)
ligand_off_molecule = Molecule.from_smiles(isomer_end)
ligand_pdbfile = PDBFile(os.path.join(workDir, "ligand_openFF.pdb"))
ligand_off_topology = Topology.from_openmm(
ligand_pdbfile.topology,
unique_molecules=[ligand_off_molecule],)
force_field = ForceField("openff_unconstrained-2.0.0.offxml")
ligand_system = force_field.create_openmm_system(ligand_off_topology)
new_ligand_structure = parmed.openmm.load_topology(
ligand_off_topology.to_openmm(),
ligand_system,
xyz=pieces[1][0].positions,)
new_ligand_structure.save(os.path.join(workDir, "ligand.prmtop"), overwrite=True)
new_ligand_structure.save(os.path.join(workDir, "ligand.inpcrd"), overwrite=True)
# Check how many atoms and which order elements are in the new ligand
n_atoms_new = len(new_ligand_structure.atoms)
elements_new = [atom.element for atom in new_ligand_structure.atoms]
# Check how many atoms and which order elements are in the old ligand
old_ligand_structure, n_copies = pieces[1]
n_atoms_old = len(old_ligand_structure.atoms)
elements_old = [atom.element for atom in old_ligand_structure.atoms]
print(
f"There are {n_atoms_old} in the old ligand structure and {n_atoms_new} atoms "
f"in the new ligand structure")
# Print out error message if number of atoms doesn't match
if n_atoms_new != n_atoms_old:
print(
"Error: Number of atoms in input ligand doesn't match number extracted "
"from prmtop file.")
if elements_new != elements_old:
print(
"Error: Elements in input ligand don't match elements in the ligand "
"from the prmtop file.")
print(f"Old elements: {elements_old}")
print(f"New elements: {elements_new}")
# Create a new, empty system
complex_structure = parmed.Structure()
# Add the protein. Convert explicitly to an AmberParm object to ensure that 1-4 scaling factors are preserved.
complex_structure += parmed.amber.AmberParm.from_structure(pieces[0][0])
# Add the ligand
complex_structure += parmed.amber.AmberParm.from_structure(new_ligand_structure)
# Add ions and Waters
ppdb = PandasPdb().read_pdb(pdb)
Cl = [ppdb.df['ATOM']['atom_name'] == 'Cl-']
Na = [ppdb.df['ATOM']['atom_name'] == 'Na+']
K = [ppdb.df['ATOM']['atom_name'] == 'K+']
Cl = np.array(Cl)
Na = np.array(Na)
K = np.array(K)
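# The branches below re-assemble the solvated complex in the original order
# (protein, ligand, ion species, water). Which ion structures exist in `pieces`
# depends on whether Na+, K+ and/or Cl- were actually added, hence the checks.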
if True in Cl and True in Na:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
just_ion2_structure = parmed.Structure()
just_ion2_structure += pieces[3][0]
just_ion2_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
complex_structure += parmed.amber.AmberParm.from_structure(just_ion2_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[4][0]
just_water_structure *= len(pieces[4][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Cl and True in K:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
just_ion2_structure = parmed.Structure()
just_ion2_structure += pieces[3][0]
just_ion2_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
complex_structure += parmed.amber.AmberParm.from_structure(just_ion2_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[4][0]
just_water_structure *= len(pieces[4][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Cl:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Na:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in K:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
else:
just_water_structure = parmed.Structure()
just_water_structure += pieces[2][0]
just_water_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
# Copy over the original coordinates and box vectors
complex_structure.coordinates = orig_structure.coordinates
complex_structure.box_vectors = orig_structure.box_vectors
# Export the Structure to AMBER files
top = os.path.join(workDir, "SYS_openff.prmtop")
crd = os.path.join(workDir, "SYS_openff.inpcrd")
complex_structure.save(top, overwrite=True)
complex_structure.save(crd, overwrite=True)
top_openff = os.path.exists(top)
crd_openff = os.path.exists(crd)
if top_openff == True and crd_openff == True:
print("Successfully generated topology! :-)")
else:
print("ERROR: Check your inputs! ")
else:
pdb_amber = os.path.exists(pdb)
top_amber = os.path.exists(top)
crd_amber = os.path.exists(crd)
if pdb_amber == True and top_amber == True and crd_amber == True:
print("Successfully generated topology! :-)")
else:
print("ERROR: Check your inputs! ")
!!rm *.sh ANTECHAMBER* ATOMTYPE* temp.txt >/dev/null 2>&1
###Output
_____no_output_____
###Markdown
Let's take a look at our simulation box:
###Code
#@title **Show 3D structure**
import ipywidgets
from ipywidgets import interact, fixed
import warnings
warnings.filterwarnings('ignore')
def show_pdb(show_box=True,
show_ligand=True,
show_sidechains=False,
show_mainchain=False,
color="None"):
def mainchain(p, color="white", model=0):
BB = ['C','O','N','CA']
p.addStyle({"model":model,'atom':BB},
{'stick':{'colorscheme':f"{color}Carbon",'radius':0.3}})
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def ligand(p, model=0):
HP = ['LIG']
p.addStyle({"model":model,'and':[{'resn':HP}]},
{'stick':{'colorscheme':'greenCarbon','radius':0.3}})
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def box(p, model=0):
p.addModelsAsFrames(pdb)
p.addSurface(py3Dmol.SAS, {'opacity': 0.6, 'color':'white'}) #comment this line if you dont want to see the water box
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def sidechain(p, model=0):
HP = ["ALA","GLY","VAL","ILE","LEU","PHE","MET","PRO","TRP","CYS","TYR"]
BB = ['C','O','N']
p.addStyle({"model":model,'and':[{'resn':HP},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':HP,'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open(pdb,'r').read(),'pdb')
if color == "rainbow":
p.setStyle({'cartoon': {'color':'spectrum'}})
else:
p.setStyle({'cartoon':{}})
if show_sidechains: sidechain(p)
if show_mainchain: mainchain(p)
if show_ligand: ligand(p)
if show_box: box(p)
p.zoomTo()
return p.show()
interact(show_pdb,
show_box=ipywidgets.Checkbox(value=True),
show_ligand=ipywidgets.Checkbox(value=True),
show_sidechains=ipywidgets.Checkbox(value=False),
show_mainchain=ipywidgets.Checkbox(value=False),
color=ipywidgets.Dropdown(options=['None', 'rainbow'], value='None'))
#@title **View and check the Ligand Interaction Network (LigPlot)**
#@markdown This diagram is interactive and allows moving around the residues, as well as clicking the legend to toggle the display of specific residue types or interactions. The diagram will be saved as an HTML file (initial.html).
import MDAnalysis as mda
import prolif as plf
import numpy as np
import os
from prolif.plotting.network import LigNetwork
# load topology
u = mda.Universe(os.path.join(workDir, "SYS_gaff2.prmtop"), pdb)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
fp = plf.Fingerprint()
fp.run(u.trajectory[::1], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="frame", frame=0,
rotation=270)
net.save(os.path.join(workDir, "initial.html"))
net.display()
###Output
_____no_output_____
###Markdown
---

**Equilibrating the simulation box**

A proper MD equilibration protocol is designed to equilibrate both temperature and pressure throughout the simulation box while preserving the protein's experimental conformation. In addition, we also allow the solvent to accommodate around the protein, creating proper solvation layers.

Below, we will set up the MD equilibration parameters, such as temperature, pressure and the desired simulation time. We will define the force constant used to restrain protein heavy atoms in place and the frequency at which we want to save atomic coordinates in a trajectory file (.dcd).

After you are done, you can run the next 2 cells to equilibrate your system.
###Code
#@title ### **Parameters for MD Equilibration protocol:**
# remove whitespaces
Jobname = 'prot_lig_equil' #@param {type:"string"}
Minimization_steps = "1000" #@param ["1000", "5000", "10000", "20000", "50000", "100000"]
#@markdown Simulation time (in nanoseconds) and integration time (in femtoseconds):
Time = "5" #@param {type:"string"}
stride_time_eq = Time
Integration_timestep = "2" #@param ["0.5", "1", "2", "3", "4"]
dt_eq = Integration_timestep
#@markdown Temperature (in Kelvin) and Pressure (in bar)
Temperature = 298 #@param {type:"string"}
temperature_eq = Temperature
Pressure = 1 #@param {type:"string"}
pressure_eq = Pressure
#@markdown Position restraints force constant (in kJ/mol/nm^2):
Force_constant = 800 #@param {type:"slider", min:0, max:2000, step:100}
#@markdown Frequency to write the trajectory file (in picoseconds):
Write_the_trajectory = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_trajectory_eq = Write_the_trajectory
#@markdown Frequency to write the log file (in picoseconds):
Write_the_log = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_log_eq = Write_the_log
#@markdown ---
#@title **Runs an Equilibration MD simulation (NPT ensemble)**
#@markdown Now, let's equilibrate our system!
###########################################
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
import pytraj as pt
from sys import stdout, exit, stderr
import os, math, fnmatch
#############################################
# Defining MD simulation parameters
jobname = os.path.join(workDir, Jobname)
coordinatefile = crd
pdbfile = pdb
topologyfile = top
time_ps = float(Time)*1000
simulation_time = float(time_ps)*picosecond # in ps
dt = int(dt_eq)*femtosecond
temperature = float(temperature_eq)*kelvin
savcrd_freq = int(write_the_trajectory_eq)*picosecond
print_freq = int(write_the_log_eq)*picosecond
pressure = float(pressure_eq)*bar
restraint_fc = int(Force_constant) # harmonic force constant k, in kJ/mol/nm^2
nsteps = int(simulation_time.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nprint = int(print_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nsavcrd = int(savcrd_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
#############################################
# Defining functions to use below:
def backup_old_log(pattern, string):
result = []
for root, dirs, files in os.walk("./"):
for name in files:
if fnmatch.fnmatch(name, pattern):
try:
number = int(name[-2])
avail = isinstance(number, int)
#print(name,avail)
if avail == True:
result.append(number)
except:
pass
if len(result) > 0:
maxnumber = max(result)
else:
maxnumber = 0
backup_file = "\#" + string + "." + str(maxnumber + 1) + "#"
os.system("mv " + string + " " + backup_file)
return backup_file
def restraints(system, crd, fc, restraint_array):
boxlx = system.getDefaultPeriodicBoxVectors()[0][0].value_in_unit(nanometers)
boxly = system.getDefaultPeriodicBoxVectors()[1][1].value_in_unit(nanometers)
boxlz = system.getDefaultPeriodicBoxVectors()[2][2].value_in_unit(nanometers)
if fc > 0:
# positional restraints for all heavy-atoms
posresPROT = CustomExternalForce('k*periodicdistance(x, y, z, x0, y0, z0)^2;')
posresPROT.addPerParticleParameter('k')
posresPROT.addPerParticleParameter('x0')
posresPROT.addPerParticleParameter('y0')
posresPROT.addPerParticleParameter('z0')
for atom1 in restraint_array:
atom1 = int(atom1)
xpos = crd.positions[atom1].value_in_unit(nanometers)[0]
ypos = crd.positions[atom1].value_in_unit(nanometers)[1]
zpos = crd.positions[atom1].value_in_unit(nanometers)[2]
posresPROT.addParticle(atom1, [fc, xpos, ypos, zpos])
system.addForce(posresPROT)
return system
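# Note: with positions in nanometers, the k in the harmonic restraint above carries
# units of kJ/mol/nm^2; the Force_constant slider value is passed in directly as k.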
##############################################
#############################################
print("\n> Simulation details:\n")
print("\tJob name = " + jobname)
print("\tCoordinate file = " + str(coordinatefile))
print("\tPDB file = " + str(pdbfile))
print("\tTopology file = " + str(topologyfile))
print("\n\tSimulation_time = " + str(simulation_time))
print("\tIntegration timestep = " + str(dt))
print("\tTotal number of steps = " + str(nsteps))
print("\n\tSave coordinates each " + str(savcrd_freq))
print("\tPrint in log file each " + str(print_freq))
print("\n\tTemperature = " + str(temperature))
print("\tPressure = " + str(pressure))
#############################################
print("\n> Setting the system:\n")
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
print("\t- Reading topology and structure file...")
prmtop = pmd.load_file(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = complex_structure.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
else:
print("\t- Reading topology and structure file...")
prmtop = AmberPrmtopFile(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = prmtop.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
print("\t- Applying restraints. Force Constant = " + str(Force_constant) + "kJ/mol")
pt_system = pt.iterload(coordinatefile, topologyfile)
pt_topology = pt_system.top
restraint_array = pt.select_atoms('!(:H*) & !(:WAT) & !(:Na+) & !(:Cl-) & !(:Mg+) & !(:K+)', pt_topology)
system = restraints(system, inpcrd, restraint_fc, restraint_array)
print("\t- Setting barostat...")
system.addForce(MonteCarloBarostat(pressure, temperature))
print("\t- Setting integrator...")
integrator = LangevinIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)
print("\t- Energy minimization: " + str(Minimization_steps) + " steps")
simulation.minimizeEnergy(tolerance=10*kilojoule/mole, maxIterations=int(Minimization_steps))
print("\t-> Potential Energy = " + str(simulation.context.getState(getEnergy=True).getPotentialEnergy()))
print("\t- Setting initial velocities...")
simulation.context.setVelocitiesToTemperature(temperature)
#############################################
# Running Equilibration on NPT ensemble
dcd_file = jobname + ".dcd"
log_file = jobname + ".log"
rst_file = jobname + ".rst"
prv_rst_file = jobname + ".rst"
pdb_file = jobname + ".pdb"
# Creating a trajectory file and reporters
dcd = DCDReporter(dcd_file, nsavcrd)
firstdcdstep = (nsteps) + nsavcrd
dcd._dcd = DCDFile(dcd._out, simulation.topology, simulation.integrator.getStepSize(), firstdcdstep, nsavcrd) # charmm doesn't like first step to be 0
simulation.reporters.append(dcd)
simulation.reporters.append(StateDataReporter(stdout, nprint, step=True, speed=True, progress=True, totalSteps=nsteps, remainingTime=True, separator='\t\t'))
simulation.reporters.append(StateDataReporter(log_file, nprint, step=True, kineticEnergy=True, potentialEnergy=True, totalEnergy=True, temperature=True, volume=True, speed=True))
print("\n> Simulating " + str(nsteps) + " steps...")
simulation.step(nsteps)
simulation.reporters.clear() # remove all reporters so the next iteration doesn't trigger them.
##################################
# Writing last frame information of stride
print("\n> Writing state file (" + str(rst_file) + ")...")
state = simulation.context.getState( getPositions=True, getVelocities=True )
with open(rst_file, 'w') as f:
f.write(XmlSerializer.serialize(state))
last_frame = int(nsteps/nsavcrd)
print("> Writing coordinate file (" + str(pdb_file) + ", frame = " + str(last_frame) + ")...")
positions = simulation.context.getState(getPositions=True).getPositions()
PDBFile.writeFile(simulation.topology, positions, open(pdb_file, 'w'))
print("\n> Finished!\n")
###Output
_____no_output_____
###Markdown
---

**Running a Production MD simulation**

Finally, we will proceed with the production simulation itself, using the equilibrated system coordinates as the input structure.

Note that we will use a *.rst state file* here, which contains atomic velocities and positions from the last frame of the equilibration simulation, guaranteeing that our production simulation begins from a thermodynamically equilibrated system.

Another important point here is the **Number_of_strides** and the **Stride_Time**. In this notebook, we simulate a defined number of *strides*, so the **simulation time = Number_of_strides x Stride_Time**. For example, we can simulate 100 ns by setting *Number_of_strides=10* and *Stride_Time=10 ns*.

**Important: at the end of the production simulation, we concatenate all strides to create a complete trajectory file which can be visualized and analyzed.**

The idea behind this approach is to make use of the intermittent 12h/24h periods in which Google Colab allows us to use its GPUs.
###Code
#@markdown ### **Provide input file names below:**
Equilibrated_PDB = 'prot_lig_equil.pdb' #@param {type:"string"}
State_file = 'prot_lig_equil.rst' #@param {type:"string"}
#@markdown ---
#@markdown ### **Parameters for MD Production protocol:**
# remove whitespaces
Jobname = 'prot_lig_prod' #@param {type:"string"}
#@markdown Simulation time (in nanoseconds), number of strides (integers) and integration timestep (in femtoseconds):
Stride_Time = "5" #@param {type:"string"}
stride_time_prod = Stride_Time
Number_of_strides = "1" #@param {type:"string"}
nstride = Number_of_strides
Integration_timestep = "2" #@param ["0.5", "1", "2", "3", "4"]
dt_prod = Integration_timestep
#@markdown Temperature (in Kelvin) and Pressure (in bar)
Temperature = 298 #@param {type:"string"}
temperature_prod = Temperature
Pressure = 1 #@param {type:"string"}
pressure_prod = Pressure
#@markdown Frequency to write the trajectory file (in picoseconds):
Write_the_trajectory = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_trajectory_prod = Write_the_trajectory
#@markdown Frequency to write the log file (in picoseconds):
Write_the_log = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_log_prod = Write_the_log
#@markdown ---
#@title **Runs a Production MD simulation (NPT ensemble) after equilibration**
#
###########################################
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
from sys import stdout, exit, stderr
import os, math, fnmatch
#############################################
# Defining MD simulation parameters
jobname = os.path.join(workDir, str(Jobname))
coordinatefile = crd
pdbfile = os.path.join(workDir, Equilibrated_PDB)
topologyfile = top
equil_rst_file = os.path.join(workDir, State_file)
stride_time_ps = float(stride_time_prod)*1000
stride_time = float(stride_time_ps)*picosecond
nstride = int(Number_of_strides)
dt = int(dt_prod)*femtosecond
temperature = float(temperature_prod)*kelvin
savcrd_freq = int(write_the_trajectory_prod)*picosecond
print_freq = int(write_the_log_prod)*picosecond
pressure = float(pressure_prod)*bar
simulation_time = stride_time*nstride
nsteps = int(stride_time.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nprint = int(print_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nsavcrd = int(savcrd_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
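# Worked example with the defaults above (a sketch): Stride_Time = 5 ns and dt = 2 fs give
# nsteps = 5,000 ps / 0.002 ps = 2,500,000 steps per stride; with frames written every
# 10 ps, nsavcrd = 5,000 steps, i.e. 500 frames saved per stride.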
firststride = 1 # must be integer
#############################################
# Defining functions to use below:
def backup_old_log(pattern, string):
result = []
for root, dirs, files in os.walk("./"):
for name in files:
if fnmatch.fnmatch(name, pattern):
try:
number = int(name[-2])
avail = isinstance(number, int)
#print(name,avail)
if avail == True:
result.append(number)
except:
pass
if len(result) > 0:
maxnumber = max(result)
else:
maxnumber = 0
backup_file = "\#" + string + "." + str(maxnumber + 1) + "#"
os.system("mv " + string + " " + backup_file)
return backup_file
##############################################
#############################################
print("\n> Simulation details:\n")
print("\tJob name = " + jobname)
print("\tCoordinate file = " + str(coordinatefile))
print("\tPDB file = " + str(pdbfile))
print("\tTopology file = " + str(topologyfile))
print("\n\tSimulation_time = " + str(stride_time*nstride))
print("\tIntegration timestep = " + str(dt))
print("\tTotal number of steps = " + str(nsteps*nstride))
print("\tNumber of strides = " + str(nstride) + " (" + str(stride_time) + " in each stride)")
print("\n\tSave coordinates each " + str(savcrd_freq))
print("\tSave checkpoint each " + str(savcrd_freq))
print("\tPrint in log file each " + str(print_freq))
print("\n\tTemperature = " + str(temperature))
print("\tPressure = " + str(pressure))
#############################################
print("\n> Setting the system:\n")
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
print("\t- Reading topology and structure file...")
prmtop = pmd.load_file(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = complex_structure.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
else:
print("\t- Reading topology and structure file...")
prmtop = AmberPrmtopFile(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = prmtop.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
print("\t- Setting barostat...")
system.addForce(MonteCarloBarostat(pressure, temperature))
print("\t- Setting integrator...")
integrator = LangevinIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)
#############################################
# Opening a loop of extension NSTRIDE to simulate the entire STRIDE_TIME*NSTRIDE
for n in range(1, nstride + 1):
print("\n\n>>> Simulating Stride #" + str(n) + " <<<")
dcd_file = jobname + "_" + str(n) + ".dcd"
log_file = jobname + "_" + str(n) + ".log"
rst_file = jobname + "_" + str(n) + ".rst"
prv_rst_file = jobname + "_" + str(n-1) + ".rst"
pdb_file = jobname + "_" + str(n) + ".pdb"
if os.path.exists(rst_file):
print("> Stride #" + str(n) + " finished (" + rst_file + " present). Moving to next stride... <")
continue
if n == 1:
print("\n> Loading previous state from equilibration > " + equil_rst_file + " <")
with open(equil_rst_file, 'r') as f:
simulation.context.setState(XmlSerializer.deserialize(f.read()))
currstep = int((n-1)*nsteps)
currtime = currstep*dt.in_units_of(picosecond)
simulation.currentStep = currstep
simulation.context.setTime(currtime)
print("> Current time: " + str(currtime) + " (Step = " + str(currstep) + ")")
else:
print("> Loading previous state from > " + prv_rst_file + " <")
with open(prv_rst_file, 'r') as f:
simulation.context.setState(XmlSerializer.deserialize(f.read()))
currstep = int((n-1)*nsteps)
currtime = currstep*dt.in_units_of(picosecond)
simulation.currentStep = currstep
simulation.context.setTime(currtime)
print("> Current time: " + str(currtime) + " (Step = " + str(currstep) + ")")
dcd = DCDReporter(dcd_file, nsavcrd)
firstdcdstep = (currstep) + nsavcrd
dcd._dcd = DCDFile(dcd._out, simulation.topology, simulation.integrator.getStepSize(), firstdcdstep, nsavcrd) # first step should not be 0
simulation.reporters.append(dcd)
simulation.reporters.append(StateDataReporter(stdout, nprint, step=True, speed=True, progress=True, totalSteps=(nsteps*nstride), remainingTime=True, separator='\t\t'))
simulation.reporters.append(StateDataReporter(log_file, nprint, step=True, kineticEnergy=True, potentialEnergy=True, totalEnergy=True, temperature=True, volume=True, speed=True))
print("\n> Simulating " + str(nsteps) + " steps... (Stride #" + str(n) + ")")
simulation.step(nsteps)
simulation.reporters.clear() # remove all reporters so the next iteration don't trigger them.
##################################
# Writing last frame information of stride
print("\n> Writing state file (" + str(rst_file) + ")...")
state = simulation.context.getState( getPositions=True, getVelocities=True )
with open(rst_file, 'w') as f:
f.write(XmlSerializer.serialize(state))
last_frame = int(nsteps/nsavcrd)
print("> Writing coordinate file (" + str(pdb_file) + ", frame = " + str(last_frame) + ")...")
positions = simulation.context.getState(getPositions=True).getPositions()
PDBFile.writeFile(simulation.topology, positions, open(pdb_file, 'w'))
print("\n> Finished!\n")
#@title **Concatenate and align the trajectory**
Skip = "1" #@param ["1", "2", "5", "10", "20", "50"]
stride_traj = Skip
Output_format = "dcd" #@param ["dcd", "pdb", "trr", "xtc"]
#@markdown **Attention:** Too many frames can exhaust Colab's memory. You should be fine with 5000 frames or less.
simulation_time_analysis = stride_time_ps*nstride
simulation_ns = float(Stride_Time)*int(Number_of_strides)
number_frames = int(simulation_time_analysis)/int(Write_the_trajectory)
number_frames_analysis = number_frames/int(stride_traj)
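# Bookkeeping with the defaults (a sketch): 1 stride x 5 ns = 5,000 ps of trajectory,
# written every 10 ps -> 500 frames; with Skip = 1, all 500 frames enter the analysis.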
traj_end = os.path.join(workDir, str(Jobname) + "_all.dcd")
traj_end2 = os.path.join(workDir, str(Jobname) + "_all." + str(Output_format))
template = os.path.join(workDir, str(Jobname) + '_%s.dcd')
flist = [template % str(i) for i in range(1, nstride + 1)]
#print(flist)
trajlist = pt.load(flist, pdb, stride=stride_traj)
traj_image = trajlist.iterframe(autoimage=True, rmsfit=0)
traj_write = pt.write_traj(traj_end, traj_image, overwrite=True)
traj_load = pt.load(traj_end, pdb)
traj_align = pt.align(traj_load, mask="@CA", ref=0)
traj_write = pt.write_traj(traj_end, traj_align, overwrite=True, options='dcd')
traj_write = pt.write_traj(traj_end2, traj_align, overwrite=True, options=Output_format)
traj_load = pt.load(traj_end, os.path.join(workDir, "SYS_gaff2.prmtop"))
print(traj_load)
traj_end_check = os.path.exists(traj_end2)
if traj_end_check == True:
print("Trajectory concatenated successfully! :-)")
else:
print("ERROR: Check your inputs! ")
#@title **Load, view and check the trajectory**
#@markdown This will take a few minutes. Another coffee would be great. :-)
import warnings
warnings.filterwarnings('ignore')
!rm *.pdb 2> /dev/null
#py3dmol functions
class Atom(dict):
def __init__(self, line):
self["type"] = line[0:6].strip()
self["idx"] = line[6:11].strip()
self["name"] = line[12:16].strip()
self["resname"] = line[17:20].strip()
self["resid"] = int(int(line[22:26]))
self["x"] = float(line[30:38])
self["y"] = float(line[38:46])
self["z"] = float(line[46:54])
self["sym"] = line[76:78].strip()
def __str__(self):
line = list(" " * 80)
line[0:6] = self["type"].ljust(6)
line[6:11] = self["idx"].ljust(5)
line[12:16] = self["name"].ljust(4)
line[17:20] = self["resname"].ljust(3)
line[22:26] = str(self["resid"]).ljust(4)
line[30:38] = str(self["x"]).rjust(8)
line[38:46] = str(self["y"]).rjust(8)
line[46:54] = str(self["z"]).rjust(8)
line[76:78] = self["sym"].rjust(2)
return "".join(line) + "\n"
class Molecule(list):
def __init__(self, file):
for line in file:
if "ATOM" in line or "HETATM" in line:
self.append(Atom(line))
def __str__(self):
outstr = ""
for at in self:
outstr += str(at)
return outstr
if number_frames_analysis > 10:
stride_animation = number_frames_analysis/10
else:
stride_animation = 1
u = mda.Universe(pdb, traj_end)
# Write out frames for animation
protein = u.select_atoms('not (resname WAT)')
i = 0
for ts in u.trajectory[0:len(u.trajectory):int(stride_animation)]:
if i > -1:
with mda.Writer('' + str(i) + '.pdb', protein.n_atoms) as W:
W.write(protein)
i = i + 1
# Load frames as molecules
molecules = []
for i in range(int(len(u.trajectory)/int(stride_animation))):
with open('' + str(i) + '.pdb') as ifile:
molecules.append(Molecule(ifile))
models = ""
for i in range(len(molecules)):
models += "MODEL " + str(i) + "\n"
for j,mol in enumerate(molecules[i]):
models += str(mol)
models += "ENDMDL\n"
#view.addModelsAsFrames(models)
# Animation
view = py3Dmol.view(width=800, height=600)
view.addModelsAsFrames(models)
for i, at in enumerate(molecules[0]):
default = {"cartoon": {'color': 'spectrum'}}
view.setViewStyle({'style':'outline','color':'black','width':0.1})
view.setStyle({'model': -1, 'serial': i+1}, at.get("pymol", default))
HP = ['LIG']
view.setStyle({"model":-1,'and':[{'resn':HP}]},{'stick':{'radius':0.3}})
view.zoomTo()
view.animate({'loop': "forward"})
view.show()
#@title **View and check the Ligand Interaction Network (LigPlot) during MD simulations**
#@markdown This diagram is interactive and allows moving around the residues, as well as clicking the legend to toggle the display of specific residue types or interactions. The diagram will be saved as an HTML file (output.html).
#@markdown **Provide output file names below:**
Output_name = 'Interaction' #@param {type:"string"}
#@markdown The frequency with which an interaction is seen will control the width of the corresponding edge. You can hide the least frequent interactions by using a threshold, e.g., threshold=0.3 will hide interactions that occur in less than 30% of frames.
Threshold = 0.3 #@param {type:"slider", min:0, max:1.0, step:0.1}
import MDAnalysis as mda
import prolif as plf
import numpy as np
import os
from prolif.plotting.network import LigNetwork
# load topology
u = mda.Universe(os.path.join(workDir, "SYS_gaff2.prmtop"), traj_end)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
if number_frames_analysis > 10:
stride_animation = number_frames_analysis/10
else:
stride_animation = 1
fp = plf.Fingerprint()
fp.run(u.trajectory[::int(stride_animation)], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="aggregate", threshold=float(Threshold),
rotation=270)
net.save(os.path.join(workDir, Output_name + ".html"))
net.display()
###Output
_____no_output_____
###Markdown
------ **Analysis**Although visualizing your trajectory can be quite useful, sometimes you also want more quantitative data.Analyses of MD trajectories vary a lot and we do not intend to cover them all here. However, one can make use of MDAnalysis or PyTraj to easily analyze simulations. Below, you can find a few examples of code snippets that can help you shed some light on your simulation's behavior.
###Code
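# (Example) A minimal sketch of a custom PyTraj analysis, reusing the
# traj_load object created after concatenation; pt.hbond counts hydrogen
# bonds among the atoms selected by the mask (here, everything but solvent
# and counterions). Adapt the mask to your own system.
hb = pt.hbond(traj_load, mask='!(:WAT,Na+,Cl-)')
print("Solute H-bonds per frame: " + str(hb.total_solute_hbonds()))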
#@title **MM-PBSA method to calculate the binding free energy**
#@markdown **Important:** We will now calculate the interaction energy and solvation free energy for the complex, receptor and ligand, and average the results to obtain an estimate of the binding free energy. Please note that we will not calculate the entropy contribution to binding, so, strictly speaking, our result will not be a true free energy, but it can be used to compare against similar systems. We will carry out the binding energy calculation using both the MM-GBSA and the MM-PBSA methods for comparison.
#@markdown Select the GB/SA input parameters; the "OBC" models (igb=2 and 5) are newer, appear to give significant improvements, and are recommended for most projects (for more information, check Section 4.1 of the [Amber Manual](https://ambermd.org/doc12/Amber20.pdf)):
igb = "2" #@param ["0", "1", "2", "5", "6", "7", "8", "10"]
Salt_concentration = '0.15' #@param {type:"string"}
#@markdown **Provide output file names below:**
Output_name = 'FINAL_RESULTS_MMPBSA' #@param {type:"string"}
final_mmpbsa = os.path.join(workDir, Output_name)
if number_frames_analysis > 10:
stride = number_frames_analysis/10
else:
stride = 1
f = open("mmpbsa.in", "w")
f.write("""&general """ + "\n"
""" endframe=""" + str(int(number_frames_analysis)) + """, interval=""" + str(int(stride)) + """, strip_mask=:WAT:Na+:Cl-:Mg+:K+, """ + "\n"
"""/ """ + "\n"
"""&gb """ + "\n"
""" igb=""" + str(igb) + """, saltcon=""" + str(Salt_concentration) + """, """ + "\n"
"""/ """ + "\n"
"""&pb """ + "\n"
""" istrng=""" + str(Salt_concentration) + """, inp=2, radiopt=0, prbrad=1.4, """ + "\n"
"""/""")
f.close()
amberhome = "source /usr/local/amber.sh"
ante_MMPBSA = "ante-MMPBSA.py -p " + os.path.join(workDir, "SYS_gaff2.prmtop") + " -c com.prmtop -r rec.prmtop -l ligand.prmtop -s :WAT:Na+:Cl-:Mg+:K+ -n :LIG --radii mbondi2"
MMPBSA = "MMPBSA.py -O -i mmpbsa.in -o " + str(final_mmpbsa) + ".dat -sp " + os.path.join(workDir, "SYS_gaff2.prmtop") + " -cp com.prmtop -rp rec.prmtop -lp ligand.prmtop -y " + str(traj_end)
mkdir = "mkdir " + os.path.join(workDir, "MMPBSA")
mv = "mv _MMPBSA* *.prmtop reference.frc mmpbsa.in " + os.path.join(workDir, "MMPBSA")
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_MMPBSA.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(amberhome)
print(ante_MMPBSA)
print(MMPBSA)
print(mkdir)
print(mv)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 run_MMPBSA.sh 2>&1 1>/dev/null
!bash run_MMPBSA.sh 2>&1 1>/dev/null
f_mmpbsa = open(final_mmpbsa + '.dat', 'r')
file_contents = f_mmpbsa.read()
print(file_contents)
f_mmpbsa.close()
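# (Example) A minimal sketch of pulling the averaged binding energies out of
# the results file printed above; "DELTA TOTAL" is the summary label MMPBSA.py
# writes in both the GB and PB sections of the .dat file.
with open(final_mmpbsa + '.dat') as f_dat:
  for line in f_dat:
    if line.startswith("DELTA TOTAL"):
      print("Binding energy estimate: " + line.split()[2] + " kcal/mol")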
#@title **Interaction Energy**
#@markdown **Important:** To quantify the strength of the interaction between the ligand and the protein, we will compute the nonbonded interaction energy between these two species. It is important to note that this quantity is NOT a free energy or a binding energy.
#@markdown **Provide output file names below:**
Output_name = 'Interaction_energy' #@param {type:"string"}
pt_topology = traj_load.top
restraint_array = pt.select_atoms('!(:WAT) & !(:Na+) & !(:Cl-) & !(:Mg+) & !(:K+) & !(:LIG)', pt_topology)
first_atom = restraint_array[0]
last_atom = restraint_array[-1]
mask = "LIE :LIG @" + str(first_atom+1) + "-" + str(last_atom+1)
lie = pt.analysis.energy_analysis.lie(traj_load, mask=mask, options='cutvdw 12.0 cutelec 12.0 diel 2.0', dtype='dict')
lie_elec = lie['LIE[EELEC]']
lie_vdw = lie['LIE[EVDW]']
lie_total = lie_elec + lie_vdw
lie_total_mean = mean(lie_total)
lie_total_stdev = stdev(lie_total)
print("Interaction Energy Average = " + str("{:.2f}".format(lie_total_mean)) + " \u00B1 " + str("{:.2f}".format(lie_total_stdev)) + " kcal/mol")
time = len(lie_total)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, lie_total, alpha=0.6, color = 'blue', linewidth = 1.5, label= "Total Energy")
ax = plt.plot(time_array, lie_elec, alpha=0.6, color = 'green', linewidth = 1.5, label= "Electrostatic Energy")
ax = plt.plot(time_array, lie_vdw, alpha=0.6, color = 'red', linewidth = 1.5, label= "van der Waals Energy")
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel('Interaction Energy \n (kcal/mol)', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.legend(frameon=False, loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
lie_eelec = pd.DataFrame(lie['LIE[EELEC]'])
lie_eelec.to_csv(os.path.join(workDir, Output_name + "_eelec.csv"))
lie_evdw = pd.DataFrame(lie['LIE[EVDW]'])
lie_evdw.to_csv(os.path.join(workDir, Output_name + "_evdw.csv"))
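# (Example) Smoothing the interaction energy with a simple running average;
# a minimal sketch assuming the lie_total array from above (the window size
# of 10 frames is arbitrary).
window = 10
if len(lie_total) > window:
  smoothed = np.convolve(lie_total, np.ones(window)/window, mode='valid')
  print("Running-average mean = " + str("{:.2f}".format(smoothed.mean())) + " kcal/mol")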
#@title **Compute distance between the ligand and catalytic site residues**
#@markdown **Provide output file names below:**
Output_name = 'distance' #@param {type:"string"}
#@markdown **Cutoff distance to nearest residues (Angstroms):**
Distance = '5' #@param {type:"string"}
ini = 0
top = pt_topology
for frame in traj_load:
top.set_reference(traj_load[ini])
indices = traj_load.top.select('(:LIG<:' + str(Distance) + ')&!(:WAT|:Na+,Cl-,LIG)')
residues = [res.original_resid for res in top[indices].residues]
res_string = ','.join(str(e) for e in residues)
print("Selected residues = " + res_string + "\n")
mask = ":LIG :" + str(res_string)
dist = pt.distance(traj_load, mask)
dist_mean = mean(dist)
dist_stdev = stdev(dist)
print("Distance Average = " + str("{:.2f}".format(dist_mean)) + " \u00B1 " + str("{:.2f}".format(dist_stdev)) + " Å")
time = len(dist)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, dist, alpha=1, color = 'springgreen', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Distance [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(dist)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute distance between the ligand and specific residues**
#@markdown **Provide output file names below:**
Output_name = 'distance_select' #@param {type:"string"}
#@markdown **Type the number of residues separated by commas and without spaces (1,2,3...):**
Residues = '78,84,85' #@param {type:"string"}
mask = ":LIG :" + str(Residues)
dist = pt.distance(traj_load, mask)
print("Selected residues = " + Residues + "\n")
dist_mean = mean(dist)
dist_stdev = stdev(dist)
print("Distance Average = " + str("{:.2f}".format(dist_mean)) + " \u00B1 " + str("{:.2f}".format(dist_stdev)) + " Å")
time = len(dist)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, dist, alpha=1, color = 'magenta', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Distance [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(dist)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute RMSD of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'rmsd_ca' #@param {type:"string"}
rmsd = pt.rmsd(traj_load, ref = 0, mask = "@CA")
time = len(rmsd)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, rmsd, alpha=0.6, color = 'blue', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("RMSD [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(rmsd)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
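# (Example) The same pt.rmsd call with a ligand mask reports how much the
# ligand drifts in the protein frame; nofit=True keeps the CA alignment done
# during concatenation. A minimal sketch assuming the :LIG residue name.
rmsd_lig = pt.rmsd(traj_load, ref=0, mask=":LIG", nofit=True)
print("Ligand RMSD average = " + str("{:.2f}".format(mean(rmsd_lig))) + " \u00B1 " + str("{:.2f}".format(stdev(rmsd_lig))) + " Å")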
#@title **Plot RMSD as a distribution**
#@markdown **Provide output file names below:**
Output_name = 'rmsd_dist' #@param {type:"string"}
ax = sb.kdeplot(rmsd, color="blue", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('RMSD [$\AA$]', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Compute radius of gyration of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'radius_gyration' #@param {type:"string"}
radgyr = pt.radgyr(traj_load, mask = "@CA")
time = len(radgyr)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
plt.plot(time_array, radgyr, alpha=0.6, color = 'green', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Radius of gyration ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(radgyr)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Plot radius of gyration as a distribution**
#@markdown **Provide output file names below:**
Output_name = 'radius_gyration_dist' #@param {type:"string"}
ax = sb.kdeplot(radgyr, color="green", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('Radius of gyration ($\AA$)', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Compute RMSF of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'rmsf_ca' #@param {type:"string"}
rmsf = pt.rmsf(traj_load, "@CA")
bfactor = pt.bfactors(traj_load, byres=True)
# Plotting:
plt.plot(rmsf[:,1], alpha=1.0, color = 'red', linewidth = 1.0)
plt.xlabel("Residue", fontsize = 14, fontweight = 'bold')
plt.ylabel("RMSF ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.xlim(0, len(rmsf[:-1]))
#plt.xticks(np.arange(min(rmsf[:1]), max(rmsf[:1])))
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(rmsf)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
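# (Example) RMSF can be converted to a crystallographic-style B-factor via
# B = (8*pi^2/3)*RMSF^2; a quick sketch using the per-residue values above,
# which can be compared against the pt.bfactors output.
bfactor_pred = (8 * np.pi**2 / 3) * rmsf[:,1]**2
print("Largest predicted B-factor = " + str("{:.2f}".format(bfactor_pred.max())) + " Å²")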
#@title **2D RMSD**
#@markdown **Provide output file names below:**
Output_name = '2D_rmsd' #@param {type:"string"}
last_frame = len(time_array)
stride_ticks_f = (last_frame)/5
ticks_frame = np.arange(0,(len(time_array) + float(stride_ticks_f)), float(stride_ticks_f))
a = ticks_frame.astype(float)
stride_ticks_t = (simulation_ns)/5
tick_time = np.arange(0,(float(simulation_ns) + float(stride_ticks_t)), float(stride_ticks_t))
b = tick_time.astype(float)
mat1 = pt.pairwise_rmsd(traj_load, mask="@CA", frame_indices=range(int(number_frames_analysis)))
ax = plt.imshow(mat1, cmap = 'PRGn', origin='lower', interpolation = 'bicubic')
plt.title('2D RMSD')
plt.xlabel('Time (ns)', fontsize = 14, fontweight = 'bold')
plt.ylabel('Time (ns)', fontsize = 14, fontweight = 'bold')
# plt.xticks(fontsize = 12)
# plt.yticks(fontsize = 12)
plt.xticks(a, b.round(decimals=3), fontsize = 12)
plt.yticks(a, b.round(decimals=3), fontsize = 12)
# plt.xlim(0, a[-1])
# plt.ylim(0, a[-1])
cbar1 = plt.colorbar()
cbar1.set_label("RMSD ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(mat1)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Calculate eigenvectors from Principal Component Analysis (PCA)**
data = pt.pca(traj_load, fit=True, ref=0, mask='@CA', n_vecs=2)
#print('projection values of each frame to first mode = {} \n'.format(data[0][0]))
#print('projection values of each frame to second mode = {} \n'.format(data[0][1]))
#print('eigenvalues of first two modes', data[1][0])
#print("")
#print('eigenvectors of first two modes: \n', data[1][1])
last_frame = len(time_array)
stride_ticks_f = (last_frame)/5
ticks_frame = np.arange(0,(len(time_array) + float(stride_ticks_f)), float(stride_ticks_f))
a = ticks_frame.astype(float)
a2 = a.tolist()
stride_ticks_t = (simulation_ns)/5
tick_time = np.arange(0,(float(simulation_ns) + float(stride_ticks_t)), float(stride_ticks_t))
b = tick_time.astype(float)
#@markdown **Provide output file names below:**
Output_name = 'PCA' #@param {type:"string"}
Output_PC1 = 'PC1' #@param {type:"string"}
Output_PC2 = 'PC2' #@param {type:"string"}
%matplotlib inline
%config InlineBackend.figure_format = 'retina' # high resolution
projection_data = data[0]
plt.title(r'PCA of C-$\alpha$')
PC1 = data[0][0]
PC2 = data[0][1]
a = plt.scatter(PC1,PC2, c=range(int(number_frames_analysis)), cmap='Greens', marker='o',s=8, alpha=1)
plt.clim(0, last_frame)
plt.xlabel('PC1', fontsize = 14, fontweight = 'bold')
plt.ylabel('PC2', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
# N = len(number_frames)
# x2 = np.arange(N)
cbar1 = plt.colorbar(a, orientation="vertical")
cbar1.set_label('Time(ns)', fontsize = 14, fontweight = 'bold')
cbar1.set_ticks(a2)
cbar1.set_ticklabels(b.round(decimals=3))
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
pc1=pd.DataFrame(PC1)
pc1.to_csv(os.path.join(workDir, Output_PC1 + ".csv"))
pc2=pd.DataFrame(PC2)
pc2.to_csv(os.path.join(workDir, Output_PC2 + ".csv"))
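# (Example) data[1][0] holds the eigenvalues of the first two modes (see the
# commented prints above); a larger eigenvalue means that mode captures more
# of the CA-atom fluctuation.
eigenvalues = data[1][0]
print("Eigenvalue PC1 = " + str("{:.2f}".format(eigenvalues[0])) + ", PC2 = " + str("{:.2f}".format(eigenvalues[1])))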
#@title **Plot Principal Component 1 (PC1) and Principal Component 2 (PC2) as a distribution**
Output_name = 'PCA_dist' #@param {type:"string"}
fig = plt.figure(figsize=(9,5))
plt.subplot(1, 2, 1)
ax = sb.kdeplot(PC1, color="green", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('PC1', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.subplot(1, 2, 2)
ax2 = sb.kdeplot(PC2, color="purple", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('PC2', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['bottom'].set_visible(True)
ax2.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Pearson's Cross Correlation (CC)**
#@markdown **Provide output file names below:**
Output_name = 'cross_correlation' #@param {type:"string"}
traj_align = pt.align(traj_load, mask='@CA', ref=0)
mat_cc = matrix.correl(traj_align, '@CA')
ax = plt.imshow(mat_cc, cmap = 'PiYG_r', interpolation = 'bicubic', vmin = -1, vmax = 1, origin='lower')
plt.xlabel('Residues', fontsize = 14, fontweight = 'bold')
plt.ylabel('Residues', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
cbar1 = plt.colorbar()
cbar1.set_label('$CC_{ij}$', fontsize = 14, fontweight = 'bold')
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(mat_cc)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
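# (Example) Locating the most anti-correlated residue pair in the matrix;
# a minimal sketch, indices are 0-based positions in the CA selection.
cc = np.asarray(mat_cc)
i, j = np.unravel_index(np.argmin(cc), cc.shape)
print("Most anti-correlated pair: residues " + str(i+1) + " and " + str(j+1) + " (CC = " + str("{:.2f}".format(cc[i, j])) + ")")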
###Output
_____no_output_____
###Markdown
**Hello there!**This is a Jupyter notebook for running Molecular Dynamics (MD) simulations using the OpenMM engine and the AMBER force field for **Protein and Ligand** systems. This notebook is supplementary material for the paper "***Making it rain: Cloud-based molecular simulations for everyone***" ([link here](https://doi.org/10.1021/acs.jcim.1c00998)) and we encourage you to read it before using this pipeline.The main goal of this notebook is to demonstrate how to harness the power of cloud computing to run microsecond-long MD simulations in a cheap and yet feasible fashion.--- **This notebook is NOT a standard protocol for MD simulations!** It is just a simple MD pipeline illustrating each step of a simulation protocol.--- **Bugs**- If you encounter any bugs, please report the issue to https://github.com/pablo-arantes/making-it-rain/issues**Acknowledgments**- We would like to thank the OpenMM team for developing an excellent and open-source engine. - We would like to thank the ChemosimLab ([@ChemosimLab](https://twitter.com/ChemosimLab)) team for their incredible [ProLIF](https://prolif.readthedocs.io/en/latest/index.html) (Protein-Ligand Interaction Fingerprints) tool.- A Making-it-rain by **Pablo R. Arantes** ([@pablitoarantes](https://twitter.com/pablitoarantes)), **Marcelo D. Polêto** ([@mdpoleto](https://twitter.com/mdpoleto)), **Conrado Pedebos** ([@ConradoPedebos](https://twitter.com/ConradoPedebos)) and **Rodrigo Ligabue-Braun** ([@ligabue_braun](https://twitter.com/ligabue_braun)).- Also, credit to [David Koes](https://github.com/dkoes) for his awesome [py3Dmol](https://3dmol.csb.pitt.edu/) plugin.- For related notebooks see: [Making-it-rain](https://github.com/pablo-arantes/making-it-rain) **Introduction**In general, MD simulations rely on 1) a set of atomic coordinates of all atoms in a simulation box and 2) a set of force field parameters that describe the interaction energies between atoms.In terms of inputs, we will need:* A .pdb file of the protein and a .pdb file of the ligand containing a set of atomic coordinates.In this notebook, we will simulate PDB 3HTB. To build our simulation box, we will use the LEaP program (https://ambermd.org/tutorials/pengfei/index.php). The LEaP program is a portal between many chemical structure file types (.pdb and .mol2, primarily) and the Amber model parameter file types such as .lib, .prepi, parm.dat, and .frcmod. Each of the parameter files contains pieces of information needed for constructing a simulation, whether for energy minimization or molecular dynamics. LEaP functions within a larger workflow described in Section 1.1 of the [Amber Manual](https://ambermd.org/doc12/Amber20.pdf). To build the ligand topology we will use the general AMBER force field (GAFF - http://ambermd.org/antechamber/gaff.html) and the Open Force Field Toolkit (OpenFF - https://openforcefield.org/). GAFF is compatible with the AMBER force field and has parameters for almost all organic molecules made of C, N, O, H, S, P, F, Cl, Br and I. As a complete force field, GAFF is suitable for the study of a great number of molecules in an automatic fashion. The Open Force Field Toolkit, built by the [Open Force Field Initiative](https://openforcefield.org/), is a Python toolkit for the development and application of modern molecular mechanics force fields based on direct chemical perception and rigorous statistical parameterization methods. 
You can download example input files from [here](https://github.com/pablo-arantes/making-it-rain/tree/main/PROTEIN_LIGAND); --- ------ **Setting the environment for MD calculation**First, we need to install all necessary libraries and packages for our simulation. The main packages we will be installing are:1. Anaconda (https://docs.conda.io/en/latest/miniconda.html)2. OpenMM (https://openmm.org/)3. PyTraj (https://amber-md.github.io/pytraj/latest/index.html)4. py3Dmol (https://pypi.org/project/py3Dmol/)5. ProLIF (https://github.com/chemosim-lab/ProLIF)6. Numpy (https://numpy.org/)7. Matplotlib (https://matplotlib.org/)8. AmberTools (https://ambermd.org/AmberTools.php)
###Code
#@title **Install dependencies**
#@markdown It will take a few minutes; please drink a coffee and wait. ;-)
# install dependencies
%%capture
import sys
!pip -q install py3Dmol 2>&1 1>/dev/null
!pip install --upgrade MDAnalysis 2>&1 1>/dev/null
!pip install biopandas 2>&1 1>/dev/null
!pip install rdkit-pypi
!pip install Cython
!git clone https://github.com/chemosim-lab/ProLIF.git
prolif1 = "cd /content/ProLIF"
prolif2 = "sed -i 's/mdanalysis.*/mdanalysis==2.0.0/' setup.cfg"
prolif3 = "pip install ."
original_stdout = sys.stdout # Save a reference to the original standard output
with open('prolif.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(prolif1)
print(prolif2)
print(prolif3)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 prolif.sh 2>&1 1>/dev/null
!bash prolif.sh >/dev/null 2>&1
# install conda
!wget -qnc https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!bash Miniconda3-latest-Linux-x86_64.sh -bfp /usr/local 2>&1 1>/dev/null
!rm -r Miniconda3-latest-Linux-x86_64.sh /content/ProLIF prolif.sh
!conda install -y -q -c conda-forge openmm=7.6 python=3.7 pdbfixer 2>&1 1>/dev/null
!conda install -c conda-forge ambertools --yes 2>&1 1>/dev/null
!conda install -c ambermd pytraj --yes 2>&1 1>/dev/null
!conda install -c conda-forge parmed --yes 2>&1 1>/dev/null
!conda install -c conda-forge openff-toolkit --yes 2>&1 1>/dev/null
!conda install -c bioconda pybel --yes
!conda install -c openbabel openbabel --yes
#load dependencies
sys.path.append('/usr/local/lib/python3.7/site-packages/')
from openmm import app, unit
from openmm.app import HBonds, NoCutoff, PDBFile
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.typing.engines.smirnoff import ForceField
from openff.toolkit.utils import get_data_file_path
import parmed as pmd
from biopandas.pdb import PandasPdb
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
import os
import urllib.request
import numpy as np
import MDAnalysis as mda
import py3Dmol
from __future__ import print_function
import pytraj as pt
import platform
import scipy.cluster.hierarchy
from scipy.spatial.distance import squareform
import scipy.stats as stats
import matplotlib.pyplot as plt
import pandas as pd
from scipy.interpolate import griddata
import seaborn as sb
from statistics import mean, stdev
from pytraj import matrix
from matplotlib import colors
from IPython.display import set_matplotlib_formats
!wget https://raw.githubusercontent.com/openforcefield/openff-forcefields/master/openforcefields/offxml/openff_unconstrained-2.0.0.offxml 2>&1 1>/dev/null
###Output
_____no_output_____
###Markdown
**Using Google Drive to store simulation data**Google Colab does not allow users to keep data on their computing nodes. However, we can use Google Drive to read, write, and store our simulation files. Therefore, we suggest that you:1. Create a folder in your own Google Drive and copy the necessary input files there.2. Copy the path of your created directory. We will use it below.
###Code
#@title ### **Import Google Drive**
#@markdown Click the "Run" button to make your Google Drive accessible.
from google.colab import drive
drive.flush_and_unmount()
drive.mount('/content/drive', force_remount=True)
#@title **Check if you correctly allocated GPU nodes**
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
###Output
_____no_output_____
###Markdown
------ **Loading the necessary input files**At this point, we should have all libraries and dependencies installed and all necessary input files already in your Google Drive folder.**Important**: Make sure the PDB file points to the correct path. If necessary, correct the path and re-upload the files. We will merge the receptor and ligand structure objects to form the complex. Note that the coordinates of the protein and ligand are determined by the PDB file, and they should be consistent with the ligand being positioned in the binding pocket.Below, you should provide the names of all input files and the path of the Google Drive folder containing them.
###Code
#@title **Please, provide the necessary input files below**:
#@markdown **Important:** The protonation of your ligand is crucial for the correct parameterization of the molecule.
%%capture
import pybel
import rdkit
import mdtraj as md
from rdkit import Chem
from rdkit.Chem import AllChem,Draw
from rdkit.Chem.Draw import IPythonConsole
from pdbfixer import PDBFixer
Protein_PDB_file_name = 'protein.pdb' #@param {type:"string"}
Ligand_PDB_file_name = 'ligand.pdb' #@param {type:"string"}
Add_ligand_hydrogens = "Yes" #@param ["Yes", "No"]
ligand_name = Ligand_PDB_file_name
Google_Drive_Path = '/content/drive/MyDrive/protein_ligand' #@param {type:"string"}
workDir = Google_Drive_Path
file_name = os.path.join(workDir, str(Protein_PDB_file_name))
initial_pdb = os.path.join(workDir, "starting0.pdb")
ligand_pdb = os.path.join(workDir, str(ligand_name))
ligand_pdb2 = os.path.join(workDir, "ligand_H.pdb")
starting = os.path.join(workDir, "starting1.pdb")
starting2 = os.path.join(workDir, "starting2.pdb")
starting_end = os.path.join(workDir, "starting_end.pdb")
#Add hydrogens in the ligand
if Add_ligand_hydrogens == "Yes":
fixer = PDBFixer(filename=ligand_pdb)
PDBFile.writeFile(fixer.topology, fixer.positions, open("temp.pdb", 'w'))
mol= [m for m in pybel.readfile(filename="temp.pdb", format='pdb')][0]
mol.calccharges
mol.addh()
out=pybel.Outputfile(filename="temp2.pdb",format='pdb',overwrite=True)
out.write(mol)
out.close()
md.load("temp2.pdb").save("temp2.pdb")
halogens = ['Cl', 'F', 'Br', 'I']
atom_id = []
H_id = []
with open("temp2.pdb") as f:
for line in f:
data = line.split()
if data[0] == "ATOM":
if data[2] in halogens:
atom_id.append(data[1])
if data[0] == "CONECT":
if data[1] in atom_id:
          if len(data) > 3:
            # data[2] is the heavy-atom neighbour; any later entries are
            # spurious hydrogens that addh() attached to the halogen
            H_id.extend(data[3:])
with open(ligand_pdb2, 'w') as h:
with open("temp2.pdb") as f:
for line in f:
data = line.split()
if data[0] == "ATOM":
if data[1] not in H_id:
print(line, file=h)
elif data[0] == "CONECT":
if data[1] not in atom_id:
print(line, file=h)
else:
print(line, file=h)
fixer = PDBFixer(filename=ligand_pdb2)
PDBFile.writeFile(fixer.topology, fixer.positions, open(ligand_pdb2, 'w'))
else:
fixer = PDBFixer(filename=ligand_pdb)
PDBFile.writeFile(fixer.topology, fixer.positions, open(ligand_pdb2, 'w'))
#Fix protein
pdb_parm = pmd.load_file(file_name)
pdb_parm.save(initial_pdb, standard_resnames=True, overwrite=True)
ppdb = PandasPdb().read_pdb(initial_pdb)
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['HETATM'] = ppdb.df['HETATM'][ppdb.df['HETATM']['residue_name'] == 'HOH']
ppdb.df['ATOM'] = ppdb.df['ATOM'][ppdb.df['ATOM']['atom_name'] != 'OXT']
ppdb.df['ATOM']= ppdb.df['ATOM'][ppdb.df['ATOM']['element_symbol'] != 'H']
ppdb.to_pdb(path=starting, records=['ATOM', 'HETATM'], gz=False, append_newline=True)
from Bio.PDB import is_aa
from Bio.PDB import PDBParser, PDBIO, Select
class ProtSelect(Select):
def accept_residue(self, residue):
print(f"{residue} -> {is_aa(residue)}")
return is_aa(residue, standard=True)
from Bio import PDB
pdb_ini = PDBParser().get_structure("pdb", starting)
io = PDBIO()
io.set_structure(pdb_ini)
io.save(starting2, ProtSelect());
pdb4amber_cmd = "pdb4amber -i " + str(starting2) + " -o " + str(starting_end) + " -p"
original_stdout = sys.stdout # Save a reference to the original standard output
with open('pdb4amber.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(pdb4amber_cmd)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 pdb4amber.sh 2>&1 1>/dev/null
!bash pdb4amber.sh 2> /dev/null
!rm pdb4amber.sh temp.pdb temp2.pdb
#@markdown ---
import rdkit
from rdkit import Chem
from rdkit.Chem import AllChem,Draw
from rdkit.Chem.Draw import IPythonConsole
#@title **Enumerate Stereoisomers to generate ligand topology:**
##@markdown **You can find the SMILES for your ligand at: https://pubchem.ncbi.nlm.nih.gov/**
mol= [m for m in pybel.readfile(filename=ligand_pdb2, format='pdb')][0]
mol.calccharges
mol.addh()
out=pybel.Outputfile(filename="temp2.smi",format='smiles',overwrite=True)
out.write(mol)
out.close()
fileObj = open("temp2.smi", "r",) #opens the file in read mode
for aRow in fileObj:
smi = aRow.split('\t')
fileObj.close()
Ligand_smiles = smi[0]
!rm temp2.smi >/dev/null 2>&1
mol = Chem.MolFromSmiles(Ligand_smiles)
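# The three helpers below enumerate every possible chirality assignment over
# the molecule's stereocentres: getCandidates yields the strings "1"*i + "0"*(n-i),
# getPerms yields all their distinct permutations, and spam collects them as
# lists of 0/1 flags (one per chiral centre).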
def spam(n):
out=[]
for perm in getPerms(n):
elem = [ int(i) for i in list(perm) ]
out.append(elem)
return out
def getPerms(n):
from itertools import permutations
for i in getCandidates(n):
for perm in set(permutations(i)):
yield ''.join(perm)
def getCandidates(n):
for i in range(0, n+1):
res = "1" * i + "0" * (n - i)
yield res
def GetStereoIsomers(mol):
from rdkit import Chem
from copy import copy
out = []
chiralCentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
  #return the molecule object when no chiral centres were identified
if chiralCentres == []:
return [mol]
  #All bit permutations with the number of bits equal to the number of chiral centres
elements = spam(len(chiralCentres))
!rm smiles.txt temp2.smi >/dev/null 2>&1
for isoId,element in enumerate(elements):
for centreId,i in enumerate(element):
atomId = chiralCentres[centreId][0]
if i == 0:
mol.GetAtomWithIdx(atomId).SetChiralTag(Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CW)
elif i == 1:
mol.GetAtomWithIdx(atomId).SetChiralTag(Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CCW)
outmol = copy(mol)
out.append(outmol)
print(Chem.MolToSmiles(mol,isomericSmiles=True), file=open("smiles.txt", "a",))
return out
Draw.MolsToGridImage(GetStereoIsomers(mol), subImgSize=(500,200), molsPerRow=1)
chiralCentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
if chiralCentres != []:
print("Follow the stereoisomers for your ligand: \n")
fileObj = open("smiles.txt", "r",) #opens the file in read mode
smiles = fileObj.read().splitlines() #puts the file into an array
fileObj.close()
x = len(smiles[:-1])
for a in range(x+1):
y = smiles[0+a:(a+1)]
globals()[f"isomer{a+1}"] = str(y[0])
print("Isomer " + str(a+1) + " = " + str(y[0]) + "\n")
else:
isomer1 = Ligand_smiles
print("No chiral centres were identified! \nIsomer 1 = " + str(isomer1) )
Draw.MolsToGridImage(GetStereoIsomers(mol), subImgSize=(700,200), molsPerRow=1, returnPNG=True)
from rdkit import Chem
from rdkit.Chem import PandasTools
from openff.toolkit.typing.engines.smirnoff import ForceField
import parmed
#@title **Parameters to generate the topology:**
#@markdown **Parameters to generate the protein topology:**
Force_field = "ff19SB" #@param ["ff19SB", "ff14SB"]
if Force_field == "ff19SB":
ff = "leaprc.protein.ff19SB"
else:
ff = "leaprc.protein.ff14SB"
Water_type = "OPC" #@param ["TIP3P", "OPC"]
if Water_type == "TIP3P":
water = "leaprc.water.tip3p"
else:
water = "leaprc.water.opc"
#@markdown Box Size (Angstrons):
Box_size = 12 #@param {type:"slider", min:10, max:20, step:1}
size_box = Box_size
#@markdown **Parameters to generate the ligand topology:**
Ligand_Force_field = "GAFF2" #@param ["GAFF2", "OpenFF 2.0.0 (Sage)"]
Ligand_isomer = "1" #@param {type:"string", min:1, max:10, step:100}
if chiralCentres == []:
isomer_end = isomer1
else:
isomer_end = globals()[f"isomer{Ligand_isomer}"]
Ligand_net_charges = "0" #@param {type:"string", min:-10, max:10, step:1}
#@markdown **ATTENTION**: AMBER tleap will neutralize your system automatically, adding Na+ and Cl- ions.
#@markdown ---
tleap = os.path.join(workDir, "tleap.in")
top_nw = os.path.join(workDir, "SYS_nw.prmtop")
crd_nw = os.path.join(workDir, "SYS_nw.crd")
pdb_nw = os.path.join(workDir, "SYS_nw.pdb")
top = os.path.join(workDir, "SYS_gaff2.prmtop")
crd = os.path.join(workDir, "SYS_gaff2.crd")
pdb = os.path.join(workDir, "SYS.pdb")
ligand_noh = os.path.join(workDir, "ligand_noh.pdb")
ligand_h = os.path.join(workDir, "ligand_h.pdb")
ligand_mol2 = os.path.join(workDir, "ligand.mol2")
ligand_frcmod = os.path.join(workDir, "ligand.frcmod")
lig_new = os.path.join(workDir, "ligand_gaff.pdb")
protein_ligand = os.path.join(workDir, "protein_ligand.pdb")
lib = os.path.join(workDir, "lig.lib")
#gaff_command1 = "pdb4amber -i " + str(ligand_pdb2) + " -o " + str(ligand_h)
gaff_command1 = "pdb4amber -i " + str(ligand_pdb2) + " -o " + str(ligand_h)
gaff_command3 = "antechamber -i " + str(ligand_h) + " -fi pdb -o " + str(ligand_mol2) + " -fo mol2 -c bcc -nc " + str(Ligand_net_charges) + " -rn LIG -at gaff2"
gaff_command4 = "parmchk2 -i " + str(ligand_mol2) + " -f mol2 -o " + str(ligand_frcmod) + " -s gaff2"
original_stdout = sys.stdout # Save a reference to the original standard output
with open('gaff.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(gaff_command1)
print(gaff_command3)
print(gaff_command4)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 gaff.sh 2>&1 1>/dev/null
!bash gaff.sh >/dev/null 2>&1
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.gaff2
LIG = loadmol2 """ + str(ligand_mol2) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""saveoff LIG """ + str(lib) + "\n"
"""savepdb LIG """ + str(lig_new) + "\n"
"""quit""")
f.close()
tleap_command = "tleap -f " + str(tleap)
cat_command = "cat " + str(starting_end) + " " + str(lig_new) + str(" > ") + str(protein_ligand)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_tleap.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(tleap_command)
print(cat_command)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
ppdb = PandasPdb().read_pdb(protein_ligand)
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['OTHERS'] = [ppdb.df['OTHERS'] != 'OTHERS']
ppdb.to_pdb(path=protein_ligand, records=['ATOM', 'HETATM'], gz=False, append_newline=True)
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.DNA.OL15
source leaprc.RNA.OL3
source leaprc.GLYCAM_06j-1
source leaprc.lipid17
source leaprc.gaff2
source """ + str(water) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""loadoff """ + str(lib) + "\n"
"""SYS = loadpdb """ + str(protein_ligand) + "\n"
"""alignaxes SYS
check SYS
charge SYS
addions SYS Na+ 0
addions2 SYS Cl- 0
check SYS
charge SYS
savepdb SYS """ + str(pdb_nw) + "\n"
"""saveamberparm SYS """ + str(top_nw) + " " + str(crd_nw) + "\n"
"""solvatebox SYS TIP3PBOX """ + str(size_box) + """ 0.7
saveamberparm SYS """ + str(top) + " " + str(crd) + "\n"
"""savepdb SYS """ + str(pdb) + "\n"
"""quit""")
f.close()
tleap_command = "tleap -f " + str(tleap)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_tleap.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(tleap_command)
sys.stdout = original_stdout # Reset the standard output to its original value
SYS = os.path.join(workDir, "SYS*")
rm_sys = "rm " + SYS
original_stdout = sys.stdout # Save a reference to the original standard output
with open('rm_sys.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(rm_sys)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 rm_sys.sh 2>&1 1>/dev/null
!bash rm_sys.sh 2> /dev/null
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
mol = Chem.MolFromPDBFile(lig_new, removeHs=False)
Chem.MolToPDBFile(mol, os.path.join(workDir, "ligand_openFF.pdb"))
in_prmtop = top
in_crd = crd
orig_structure = parmed.amber.AmberParm(in_prmtop, in_crd)
pieces = orig_structure.split()
for piece in pieces:
print(f"There are {len(piece[1])} instance(s) of {piece[0]}")
from openmm.app import PDBFile
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.tests.utils import get_data_file_path
# rdmol = Chem.MolFromMolFile(os.path.join(workDir, "ligand_openFF.sdf"))
# ligand_off_molecule = Molecule.from_rdkit(rdmol, hydrogens_are_explicit=True)
ligand_off_molecule = Molecule.from_smiles(isomer_end)
ligand_pdbfile = PDBFile(os.path.join(workDir, "ligand_openFF.pdb"))
ligand_off_topology = Topology.from_openmm(
ligand_pdbfile.topology,
unique_molecules=[ligand_off_molecule],)
force_field = ForceField("openff_unconstrained-2.0.0.offxml")
ligand_system = force_field.create_openmm_system(ligand_off_topology)
new_ligand_structure = parmed.openmm.load_topology(
ligand_off_topology.to_openmm(),
ligand_system,
xyz=pieces[1][0].positions,)
new_ligand_structure.save(os.path.join(workDir, "ligand.prmtop"), overwrite=True)
new_ligand_structure.save(os.path.join(workDir, "ligand.inpcrd"), overwrite=True)
# Check how many atoms and which order elements are in the new ligand
n_atoms_new = len(new_ligand_structure.atoms)
elements_new = [atom.element for atom in new_ligand_structure.atoms]
# Check how many atoms and which order elements are in the old ligand
old_ligand_structure, n_copies = pieces[1]
n_atoms_old = len(old_ligand_structure.atoms)
elements_old = [atom.element for atom in old_ligand_structure.atoms]
print(
f"There are {n_atoms_old} in the old ligand structure and {n_atoms_new} atoms "
f"in the new ligand structure")
# Print out error message if number of atoms doesn't match
if n_atoms_new != n_atoms_old:
print(
"Error: Number of atoms in input ligand doesn't match number extracted "
"from prmtop file.")
if elements_new != elements_old:
print(
"Error: Elements in input ligand don't match elements in the ligand "
"from the prmtop file.")
print(f"Old elements: {elements_old}")
print(f"New elements: {elements_new}")
# Create a new, empty system
complex_structure = parmed.Structure()
# Add the protein. Convert explicitly to an AmberParm object to ensure that 1-4 scaling factors are preserved.
complex_structure += parmed.amber.AmberParm.from_structure(pieces[0][0])
# Add the ligand
complex_structure += parmed.amber.AmberParm.from_structure(new_ligand_structure)
# Add ions and Waters
ppdb = PandasPdb().read_pdb(pdb)
Cl = [ppdb.df['ATOM']['atom_name'] == 'Cl-']
Na = [ppdb.df['ATOM']['atom_name'] == 'Na+']
Cl = np.array(Cl)
Na = np.array(Na)
if True in Cl and True in Na:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
just_ion2_structure = parmed.Structure()
just_ion2_structure += pieces[3][0]
just_ion2_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
complex_structure += parmed.amber.AmberParm.from_structure(just_ion2_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[4][0]
just_water_structure *= len(pieces[4][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Cl:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Na:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
else:
just_water_structure = parmed.Structure()
just_water_structure += pieces[2][0]
just_water_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
# Copy over the original coordinates and box vectors
complex_structure.coordinates = orig_structure.coordinates
complex_structure.box_vectors = orig_structure.box_vectors
# Export the Structure to AMBER files
top = os.path.join(workDir, "SYS_openff.prmtop")
crd = os.path.join(workDir, "SYS_openff.inpcrd")
complex_structure.save(top, overwrite=True)
complex_structure.save(crd, overwrite=True)
top_openff = os.path.exists(top)
crd_openff = os.path.exists(crd)
if top_openff == True and crd_openff == True:
print("Successfully generated topology! :-)")
else:
print("ERROR: Check your inputs! ")
else:
pdb_amber = os.path.exists(pdb)
top_amber = os.path.exists(top)
crd_amber = os.path.exists(crd)
if pdb_amber == True and top_amber == True and crd_amber == True:
print("Successfully generated topology! :-)")
else:
print("ERROR: Check your inputs! ")
!!rm *.sh *.log ANTECHAMBER* ATOMTYPE* >/dev/null 2>&1
###Output
_____no_output_____
###Markdown
Let's take a look at our simulation box:
###Code
#@title **Show 3D structure**
import ipywidgets
from ipywidgets import interact, fixed
import warnings
warnings.filterwarnings('ignore')
def show_pdb(show_box=True,
show_ligand=True,
show_sidechains=False,
show_mainchain=False,
color="None"):
def mainchain(p, color="white", model=0):
BB = ['C','O','N','CA']
p.addStyle({"model":model,'atom':BB},
{'stick':{'colorscheme':f"{color}Carbon",'radius':0.3}})
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def ligand(p, model=0):
HP = ['LIG']
p.addStyle({"model":model,'and':[{'resn':HP}]},
{'stick':{'colorscheme':'greenCarbon','radius':0.3}})
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def box(p, model=0):
p.addModelsAsFrames(pdb)
    p.addSurface(py3Dmol.SAS, {'opacity': 0.6, 'color':'white'}) # comment this line if you don't want to see the water box
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def sidechain(p, model=0):
HP = ["ALA","GLY","VAL","ILE","LEU","PHE","MET","PRO","TRP","CYS","TYR"]
BB = ['C','O','N']
p.addStyle({"model":model,'and':[{'resn':HP},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':HP,'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open(pdb,'r').read(),'pdb')
if color == "rainbow":
p.setStyle({'cartoon': {'color':'spectrum'}})
else:
p.setStyle({'cartoon':{}})
if show_sidechains: sidechain(p)
if show_mainchain: mainchain(p)
if show_ligand: ligand(p)
if show_box: box(p)
p.zoomTo()
return p.show()
interact(show_pdb,
show_box=ipywidgets.Checkbox(value=True),
show_ligand=ipywidgets.Checkbox(value=True),
show_sidechains=ipywidgets.Checkbox(value=False),
show_mainchain=ipywidgets.Checkbox(value=False),
color=ipywidgets.Dropdown(options=['None', 'rainbow'], value='None'))
#@title **View and check the Ligand Interaction Network (LigPlot)**
#@markdown This diagram is interactive and allows moving around the residues, as well as clicking the legend to toggle the display of specific residue types or interactions. The diagram will be saved as an HTML file (initial.html).
import MDAnalysis as mda
import prolif as plf
import numpy as np
import os
from prolif.plotting.network import LigNetwork
# load topology
u = mda.Universe(os.path.join(workDir, "SYS_gaff2.prmtop"), pdb)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
fp = plf.Fingerprint()
fp.run(u.trajectory[::1], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="frame", frame=0,
rotation=270)
net.save(os.path.join(workDir, "initial.html"))
net.display()
###Output
_____no_output_____
###Markdown
------ **Equilibrating the simulation box**A proper MD equilibration protocol is designed to equilibrate both temperature and pressure throughout the simulation box while preserving the protein's experimental conformation. In addition, we also allow the solvent to accommodate around the protein, creating proper solvation layers.Below, we will set up the MD equilibration parameters, such as temperature, pressure and the desired simulation time. We will define the force constant used to restrain protein heavy atoms in place and the frequency at which we want to save atomic coordinates in a trajectory file (.dcd).After you are done, you can run the next 2 cells to equilibrate your system.
###Code
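# (Example) Sanity check of how the step counts below are derived: 2 ns at a
# 2 fs timestep -> nsteps = 2000 ps / 0.002 ps = 1,000,000 steps, and saving
# coordinates every 10 ps -> one frame every 5,000 steps (200 frames in total).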
#@title ### **Parameters for MD Equilibration protocol:**
# remove whitespaces
Jobname = 'prot_lig_equil' #@param {type:"string"}
Minimization_steps = "1000" #@param ["1000", "5000", "10000", "20000", "50000", "100000"]
#@markdown Simulation time (in nanoseconds) and integration time (in femtoseconds):
Time = "2" #@param {type:"string"}
stride_time_eq = Time
Integration_timestep = "2" #@param ["0.5", "1", "2", "3", "4"]
dt_eq = Integration_timestep
#@markdown Temperature (in Kelvin) and Pressure (in bar)
Temperature = 298 #@param {type:"string"}
temperature_eq = Temperature
Pressure = 1 #@param {type:"string"}
pressure_eq = Pressure
#@markdown Position restraints force constant (in kJ/mol/nm²):
Force_constant = 800 #@param {type:"slider", min:0, max:2000, step:100}
#@markdown Frequency to write the trajectory file (in picoseconds):
Write_the_trajectory = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_trajectory_eq = Write_the_trajectory
#@markdown Frequency to write the log file (in picoseconds):
Write_the_log = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_log_eq = Write_the_log
#@markdown ---
#@title **Runs an Equilibration MD simulation (NPT ensemble)**
#@markdown Now, let's equilibrate our system!
###########################################
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
import pytraj as pt
from sys import stdout, exit, stderr
import os, math, fnmatch
#############################################
# Defining MD simulation parameters
jobname = os.path.join(workDir, Jobname)
coordinatefile = crd
pdbfile = pdb
topologyfile = top
time_ps = float(Time)*1000
simulation_time = float(time_ps)*picosecond # in ps
dt = int(dt_eq)*femtosecond
temperature = float(temperature_eq)*kelvin
savcrd_freq = int(write_the_trajectory_eq)*picosecond
print_freq = int(write_the_log_eq)*picosecond
pressure = float(pressure_eq)*bar
restraint_fc = int(Force_constant) # kJ/mol/nm²
nsteps = int(simulation_time.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nprint = int(print_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nsavcrd = int(savcrd_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
#############################################
# Defining functions to use below:
def backup_old_log(pattern, string):
result = []
for root, dirs, files in os.walk("./"):
for name in files:
if fnmatch.fnmatch(name, pattern):
try:
number = int(name[-2])
avail = isinstance(number, int)
#print(name,avail)
if avail == True:
result.append(number)
except:
pass
if len(result) > 0:
maxnumber = max(result)
else:
maxnumber = 0
backup_file = "\#" + string + "." + str(maxnumber + 1) + "#"
os.system("mv " + string + " " + backup_file)
return backup_file
def restraints(system, crd, fc, restraint_array):
boxlx = system.getDefaultPeriodicBoxVectors()[0][0].value_in_unit(nanometers)
boxly = system.getDefaultPeriodicBoxVectors()[1][1].value_in_unit(nanometers)
boxlz = system.getDefaultPeriodicBoxVectors()[2][2].value_in_unit(nanometers)
if fc > 0:
# positional restraints for all heavy-atoms
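    # (harmonic well centred on each atom's input coordinates; the
    # periodicdistance() form keeps the restraint consistent across
    # periodic boundary images)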
posresPROT = CustomExternalForce('k*periodicdistance(x, y, z, x0, y0, z0)^2;')
posresPROT.addPerParticleParameter('k')
posresPROT.addPerParticleParameter('x0')
posresPROT.addPerParticleParameter('y0')
posresPROT.addPerParticleParameter('z0')
for atom1 in restraint_array:
atom1 = int(atom1)
xpos = crd.positions[atom1].value_in_unit(nanometers)[0]
ypos = crd.positions[atom1].value_in_unit(nanometers)[1]
zpos = crd.positions[atom1].value_in_unit(nanometers)[2]
posresPROT.addParticle(atom1, [fc, xpos, ypos, zpos])
system.addForce(posresPROT)
return system
##############################################
#############################################
print("\n> Simulation details:\n")
print("\tJob name = " + jobname)
print("\tCoordinate file = " + str(coordinatefile))
print("\tPDB file = " + str(pdbfile))
print("\tTopology file = " + str(topologyfile))
print("\n\tSimulation_time = " + str(simulation_time))
print("\tIntegration timestep = " + str(dt))
print("\tTotal number of steps = " + str(nsteps))
print("\n\tSave coordinates each " + str(savcrd_freq))
print("\tPrint in log file each " + str(print_freq))
print("\n\tTemperature = " + str(temperature))
print("\tPressure = " + str(pressure))
#############################################
print("\n> Setting the system:\n")
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
print("\t- Reading topology and structure file...")
prmtop = pmd.load_file(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = complex_structure.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
else:
print("\t- Reading topology and structure file...")
prmtop = AmberPrmtopFile(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = prmtop.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
print("\t- Applying restraints. Force Constant = " + str(Force_constant) + "kJ/mol")
pt_system = pt.iterload(coordinatefile, topologyfile)
pt_topology = pt_system.top
restraint_array = pt.select_atoms('!(:H*) & !(:WAT) & !(:Na+) & !(:Cl-) & !(:Mg+) & !(:K+)', pt_topology)
system = restraints(system, inpcrd, restraint_fc, restraint_array)
print("\t- Setting barostat...")
system.addForce(MonteCarloBarostat(pressure, temperature))
print("\t- Setting integrator...")
integrator = LangevinIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)
print("\t- Energy minimization: " + str(Minimization_steps) + " steps")
simulation.minimizeEnergy(tolerance=10*kilojoule/mole, maxIterations=int(Minimization_steps))
print("\t-> Potential Energy = " + str(simulation.context.getState(getEnergy=True).getPotentialEnergy()))
print("\t- Setting initial velocities...")
simulation.context.setVelocitiesToTemperature(temperature)
#############################################
# Running Equilibration on NPT ensemble
dcd_file = jobname + ".dcd"
log_file = jobname + ".log"
rst_file = jobname + ".rst"
prv_rst_file = jobname + ".rst"
pdb_file = jobname + ".pdb"
# Creating a trajectory file and reporters
dcd = DCDReporter(dcd_file, nsavcrd)
firstdcdstep = (nsteps) + nsavcrd
dcd._dcd = DCDFile(dcd._out, simulation.topology, simulation.integrator.getStepSize(), firstdcdstep, nsavcrd) # charmm doesn't like first step to be 0
simulation.reporters.append(dcd)
simulation.reporters.append(StateDataReporter(stdout, nprint, step=True, speed=True, progress=True, totalSteps=nsteps, remainingTime=True, separator='\t\t'))
simulation.reporters.append(StateDataReporter(log_file, nprint, step=True, kineticEnergy=True, potentialEnergy=True, totalEnergy=True, temperature=True, volume=True, speed=True))
print("\n> Simulating " + str(nsteps) + " steps...")
simulation.step(nsteps)
simulation.reporters.clear() # remove all reporters so the next iteration doesn't trigger them.
##################################
# Writing last frame information of stride
print("\n> Writing state file (" + str(rst_file) + ")...")
state = simulation.context.getState( getPositions=True, getVelocities=True )
with open(rst_file, 'w') as f:
f.write(XmlSerializer.serialize(state))
last_frame = int(nsteps/nsavcrd)
print("> Writing coordinate file (" + str(pdb_file) + ", frame = " + str(last_frame) + ")...")
positions = simulation.context.getState(getPositions=True).getPositions()
PDBFile.writeFile(simulation.topology, positions, open(pdb_file, 'w'))
print("\n> Finished!\n")
###Output
_____no_output_____
###Markdown
------ **Running a Production MD simulation** Finally, we will proceed with the Production simulation itself, using the equilibrated system coordinates as the input structure. Note that we will use a *.rst state file*, which contains the atomic velocities and positions from the last frame of the equilibration simulation, guaranteeing that our production simulation begins from a thermodynamically equilibrated system. Two other important settings here are the **Number_of_strides** and the **Stride_Time**. In this notebook, we simulate a defined number of *strides*, so the **simulation time = Number_of_strides*Stride_Time**. For example, we can simulate 100 ns by setting *Number_of_strides=10* and *Stride_Time=10 ns*. **Important: at the end of the Production simulation, we concatenate all strides to create a complete trajectory file which can be visualized and analyzed.** The idea behind this approach is to make use of the intermittent 12h/24h windows in which Google Colab allows us to use its GPUs.
###Code
#@markdown ### **Provide input file names below:**
Equilibrated_PDB = 'prot_lig_equil.pdb' #@param {type:"string"}
State_file = 'prot_lig_equil.rst' #@param {type:"string"}
#@markdown ---
#@markdown ### **Parameters for MD Production protocol:**
# remove whitespaces
Jobname = 'prot_lig_prod' #@param {type:"string"}
#@markdown Simulation time (in nanoseconds), number of strides (integers) and integration timestep (in femtoseconds):
Stride_Time = "5" #@param {type:"string"}
stride_time_prod = Stride_Time
Number_of_strides = "1" #@param {type:"string"}
nstride = Number_of_strides
Integration_timestep = "2" #@param ["0.5", "1", "2", "3", "4"]
dt_prod = Integration_timestep
#@markdown Temperature (in Kelvin) and Pressure (in bar)
Temperature = 298 #@param {type:"string"}
temperature_prod = Temperature
Pressure = 1 #@param {type:"string"}
pressure_prod = Pressure
#@markdown Frequency to write the trajectory file (in picoseconds):
Write_the_trajectory = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_trajectory_prod = Write_the_trajectory
#@markdown Frequency to write the log file (in picoseconds):
Write_the_log = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_log_prod = Write_the_log
#@markdown ---
#@title **Runs a Production MD simulation (NPT ensemble) after equilibration**
#
###########################################
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
from sys import stdout, exit, stderr
import os, math, fnmatch
#############################################
# Defining MD simulation parameters
jobname = os.path.join(workDir, str(Jobname))
coordinatefile = crd
pdbfile = os.path.join(workDir, Equilibrated_PDB)
topologyfile = top
equil_rst_file = os.path.join(workDir, State_file)
stride_time_ps = float(stride_time_prod)*1000
stride_time = float(stride_time_ps)*picosecond
nstride = int(Number_of_strides)
dt = int(dt_prod)*femtosecond
temperature = float(temperature_prod)*kelvin
savcrd_freq = int(write_the_trajectory_prod)*picosecond
print_freq = int(write_the_log_prod)*picosecond
pressure = float(pressure_prod)*bar
simulation_time = stride_time*nstride
nsteps = int(stride_time.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nprint = int(print_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nsavcrd = int(savcrd_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
firststride = 1 # must be integer
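# Worked example with the defaults above: a 5 ns stride (5000 ps) at a 2 fs
# (0.002 ps) timestep gives nsteps = 5000/0.002 = 2,500,000 integration steps,
# and saving every 10 ps gives nsavcrd = 10/0.002 = 5,000 steps between frames.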
#############################################
# Defining functions to use below:
def backup_old_log(pattern, string):
result = []
for root, dirs, files in os.walk("./"):
for name in files:
if fnmatch.fnmatch(name, pattern):
try:
number = int(name[-2])
avail = isinstance(number, int)
#print(name,avail)
if avail == True:
result.append(number)
except:
pass
if len(result) > 0:
maxnumber = max(result)
else:
maxnumber = 0
backup_file = "\#" + string + "." + str(maxnumber + 1) + "#"
os.system("mv " + string + " " + backup_file)
return backup_file
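# Hypothetical usage sketch (the file names below are illustrative only):
# if "prod_1.log" already exists from an earlier run, back it up first, e.g.
# backup_old_log("#prod_1.log.*#", "prod_1.log")  # -> renames it to "#prod_1.log.1#"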
##############################################
#############################################
print("\n> Simulation details:\n")
print("\tJob name = " + jobname)
print("\tCoordinate file = " + str(coordinatefile))
print("\tPDB file = " + str(pdbfile))
print("\tTopology file = " + str(topologyfile))
print("\n\tSimulation_time = " + str(stride_time*nstride))
print("\tIntegration timestep = " + str(dt))
print("\tTotal number of steps = " + str(nsteps*nstride))
print("\tNumber of strides = " + str(nstride) + " (" + str(stride_time) + " in each stride)")
print("\n\tSave coordinates each " + str(savcrd_freq))
print("\tSave checkpoint each " + str(savcrd_freq))
print("\tPrint in log file each " + str(print_freq))
print("\n\tTemperature = " + str(temperature))
print("\tPressure = " + str(pressure))
#############################################
print("\n> Setting the system:\n")
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
print("\t- Reading topology and structure file...")
prmtop = pmd.load_file(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = complex_structure.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
else:
print("\t- Reading topology and structure file...")
prmtop = AmberPrmtopFile(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = prmtop.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
print("\t- Setting barostat...")
system.addForce(MonteCarloBarostat(pressure, temperature))
print("\t- Setting integrator...")
integrator = LangevinIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)
#############################################
# Opening a loop of extension NSTRIDE to simulate the entire STRIDE_TIME*NSTRIDE
for n in range(1, nstride + 1):
print("\n\n>>> Simulating Stride #" + str(n) + " <<<")
dcd_file = jobname + "_" + str(n) + ".dcd"
log_file = jobname + "_" + str(n) + ".log"
rst_file = jobname + "_" + str(n) + ".rst"
prv_rst_file = jobname + "_" + str(n-1) + ".rst"
pdb_file = jobname + "_" + str(n) + ".pdb"
if os.path.exists(rst_file):
print("> Stride #" + str(n) + " finished (" + rst_file + " present). Moving to next stride... <")
continue
if n == 1:
print("\n> Loading previous state from equilibration > " + equil_rst_file + " <")
with open(equil_rst_file, 'r') as f:
simulation.context.setState(XmlSerializer.deserialize(f.read()))
currstep = int((n-1)*nsteps)
currtime = currstep*dt.in_units_of(picosecond)
simulation.currentStep = currstep
simulation.context.setTime(currtime)
print("> Current time: " + str(currtime) + " (Step = " + str(currstep) + ")")
else:
print("> Loading previous state from > " + prv_rst_file + " <")
with open(prv_rst_file, 'r') as f:
simulation.context.setState(XmlSerializer.deserialize(f.read()))
currstep = int((n-1)*nsteps)
currtime = currstep*dt.in_units_of(picosecond)
simulation.currentStep = currstep
simulation.context.setTime(currtime)
print("> Current time: " + str(currtime) + " (Step = " + str(currstep) + ")")
dcd = DCDReporter(dcd_file, nsavcrd)
firstdcdstep = (currstep) + nsavcrd
dcd._dcd = DCDFile(dcd._out, simulation.topology, simulation.integrator.getStepSize(), firstdcdstep, nsavcrd) # first step should not be 0
simulation.reporters.append(dcd)
simulation.reporters.append(StateDataReporter(stdout, nprint, step=True, speed=True, progress=True, totalSteps=(nsteps*nstride), remainingTime=True, separator='\t\t'))
simulation.reporters.append(StateDataReporter(log_file, nprint, step=True, kineticEnergy=True, potentialEnergy=True, totalEnergy=True, temperature=True, volume=True, speed=True))
print("\n> Simulating " + str(nsteps) + " steps... (Stride #" + str(n) + ")")
simulation.step(nsteps)
	simulation.reporters.clear() # remove all reporters so the next iteration doesn't trigger them.
##################################
# Writing last frame information of stride
print("\n> Writing state file (" + str(rst_file) + ")...")
state = simulation.context.getState( getPositions=True, getVelocities=True )
with open(rst_file, 'w') as f:
f.write(XmlSerializer.serialize(state))
last_frame = int(nsteps/nsavcrd)
print("> Writing coordinate file (" + str(pdb_file) + ", frame = " + str(last_frame) + ")...")
positions = simulation.context.getState(getPositions=True).getPositions()
PDBFile.writeFile(simulation.topology, positions, open(pdb_file, 'w'))
print("\n> Finished!\n")
#@title **Concatenate and align the trajectory**
Skip = "1" #@param ["1", "2", "5", "10", "20", "50"]
stride_traj = Skip
Output_format = "dcd" #@param ["dcd", "pdb", "trr", "xtc"]
#@markdown **Attention:** A high number of frames can exhaust the available memory on Colab. You should be fine with 5000 frames or less.
simulation_time_analysis = stride_time_ps*nstride
simulation_ns = float(Stride_Time)*int(Number_of_strides)
number_frames = int(simulation_time_analysis)/int(Write_the_trajectory)
number_frames_analysis = number_frames/int(stride_traj)
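# Quick sanity check (uses only the settings defined above): report how many
# frames the concatenated, strided trajectory is expected to contain.
print("Expected number of frames for analysis: " + str(int(number_frames_analysis)))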
traj_end = os.path.join(workDir, str(Jobname) + "_all.dcd")
traj_end2 = os.path.join(workDir, str(Jobname) + "_all." + str(Output_format))
template = os.path.join(workDir, str(Jobname) + '_%s.dcd')
flist = [template % str(i) for i in range(1, nstride + 1)]
#print(flist)
trajlist = pt.load(flist, pdb, stride=stride_traj)
traj_image = trajlist.iterframe(autoimage=True, rmsfit=0)
traj_write = pt.write_traj(traj_end, traj_image, overwrite=True)
traj_load = pt.load(traj_end, pdb)
traj_align = pt.align(traj_load, mask="@CA", ref=0)
traj_write = pt.write_traj(traj_end, traj_align, overwrite=True, options='dcd')
traj_write = pt.write_traj(traj_end2, traj_align, overwrite=True, options=Output_format)
traj_load = pt.load(traj_end, os.path.join(workDir, "SYS_gaff2.prmtop"))
print(traj_load)
traj_end_check = os.path.exists(traj_end2)
if traj_end_check == True:
print("Trajectory concatenated successfully! :-)")
else:
print("ERROR: Check your inputs! ")
#@title **Load, view and check the trajectory**
#@markdown This will take a few minutes. Another coffee would be great. :-)
import warnings
warnings.filterwarnings('ignore')
!rm *.pdb 2> /dev/null
#py3dmol functions
class Atom(dict):
def __init__(self, line):
self["type"] = line[0:6].strip()
self["idx"] = line[6:11].strip()
self["name"] = line[12:16].strip()
self["resname"] = line[17:20].strip()
self["resid"] = int(int(line[22:26]))
self["x"] = float(line[30:38])
self["y"] = float(line[38:46])
self["z"] = float(line[46:54])
self["sym"] = line[76:78].strip()
def __str__(self):
line = list(" " * 80)
line[0:6] = self["type"].ljust(6)
line[6:11] = self["idx"].ljust(5)
line[12:16] = self["name"].ljust(4)
line[17:20] = self["resname"].ljust(3)
line[22:26] = str(self["resid"]).ljust(4)
line[30:38] = str(self["x"]).rjust(8)
line[38:46] = str(self["y"]).rjust(8)
line[46:54] = str(self["z"]).rjust(8)
line[76:78] = self["sym"].rjust(2)
return "".join(line) + "\n"
class Molecule(list):
def __init__(self, file):
for line in file:
if "ATOM" in line or "HETATM" in line:
self.append(Atom(line))
def __str__(self):
outstr = ""
for at in self:
outstr += str(at)
return outstr
if number_frames_analysis > 10:
stride_animation = number_frames_analysis/10
else:
stride_animation = 1
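# Example of the stride logic above: with 500 analysis frames, stride_animation
# becomes 50, so roughly 10 frames are written out and animated below.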
u = mda.Universe(pdb, traj_end)
# Write out frames for animation
protein = u.select_atoms('not (resname WAT)')
i = 0
for ts in u.trajectory[0:len(u.trajectory):int(stride_animation)]:
    with mda.Writer('' + str(i) + '.pdb', protein.n_atoms) as W:
        W.write(protein)
    i = i + 1
# Load frames as molecules
molecules = []
for i in range(int(len(u.trajectory)/int(stride_animation))):
with open('' + str(i) + '.pdb') as ifile:
molecules.append(Molecule(ifile))
models = ""
for i in range(len(molecules)):
models += "MODEL " + str(i) + "\n"
for j,mol in enumerate(molecules[i]):
models += str(mol)
models += "ENDMDL\n"
#view.addModelsAsFrames(models)
# Animation
view = py3Dmol.view(width=800, height=600)
view.addModelsAsFrames(models)
for i, at in enumerate(molecules[0]):
default = {"cartoon": {'color': 'spectrum'}}
view.setViewStyle({'style':'outline','color':'black','width':0.1})
view.setStyle({'model': -1, 'serial': i+1}, at.get("pymol", default))
HP = ['LIG']
view.setStyle({"model":-1,'and':[{'resn':HP}]},{'stick':{'radius':0.3}})
view.zoomTo()
view.animate({'loop': "forward"})
view.show()
#@title **View and check the Ligand Interaction Network (LigPlot) during MD simulations**
#@markdown This diagram is interactive and allows moving around the residues, as well as clicking the legend to toggle the display of specific residues types or interactions. The diagram will be saved as an HTML file (output.html).
#@markdown **Provide output file names below:**
Output_name = 'Interaction' #@param {type:"string"}
#@markdown The frequency with which an interaction is seen will control the width of the corresponding edge. You can hide the least frequent interactions by using a threshold, i.e. threshold=0.3 will hide interactions that occur in less than 30% of frames.
Threshold = 0.3 #@param {type:"slider", min:0, max:1.0, step:0.1}
import MDAnalysis as mda
import prolif as plf
import numpy as np
import os
from prolif.plotting.network import LigNetwork
# load topology
u = mda.Universe(os.path.join(workDir, "SYS_gaff2.prmtop"), traj_end)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
if number_frames_analysis > 10:
stride_animation = number_frames_analysis/10
else:
stride_animation = 1
fp = plf.Fingerprint()
fp.run(u.trajectory[::int(stride_animation)], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="aggregate", threshold=float(Threshold),
rotation=270)
net.save(os.path.join(workDir, Output_name + ".html"))
net.display()
###Output
_____no_output_____
###Markdown
------ **Analysis** Although visualizing your trajectory can be quite useful, sometimes you also want more quantitative data. Analyses of MD trajectories vary a lot and we do not intend to cover them all here. However, one can make use of MDAnalysis or PyTraj to easily analyze simulations. Below, you can find a few examples of code snippets that can help you to shed some light on your simulation behavior.
###Code
#@title **Interaction Energy**
#@markdown **Important:** To quantify the strength of the interaction between the ligand and the protein, we will compute the nonbonded interaction energy between these two species. It is important to note that this quantity is NOT a free energy or a binding energy.
#@markdown **Provide output file names below:**
Output_name = 'Interaction_energy' #@param {type:"string"}
pt_topology = traj_load.top
restraint_array = pt.select_atoms('!(:WAT) & !(:Na+) & !(:Cl-) & !(:Mg+) & !(:K+) & !(:LIG)', pt_topology)
first_atom = restraint_array[0]
last_atom = restraint_array[-1]
mask = "LIE :LIG @" + str(first_atom+1) + "-" + str(last_atom+1)
lie = pt.analysis.energy_analysis.lie(traj_load, mask=mask, options='cutvdw 12.0 cutelec 12.0 diel 2.0', dtype='dict')
lie_elec = lie['LIE[EELEC]']
lie_vdw = lie['LIE[EVDW]']
lie_total = lie_elec + lie_vdw
lie_total_mean = mean(lie_total)
lie_total_stdev = stdev(lie_total)
print("Interaction Energy Average = " + str("{:.2f}".format(lie_total_mean)) + " \u00B1 " + str("{:.2f}".format(lie_total_stdev)) + " kcal/mol")
time = len(lie_total)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, lie_total, alpha=0.6, color = 'blue', linewidth = 1.5, label= "Total Energy")
ax = plt.plot(time_array, lie_elec, alpha=0.6, color = 'green', linewidth = 1.5, label= "Electrostatic Energy")
ax = plt.plot(time_array, lie_vdw, alpha=0.6, color = 'red', linewidth = 1.5, label= "van der Waals Energy")
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel('Interaction Energy \n (kcal/mol)', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.legend(frameon=False, loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
lie_eelec = pd.DataFrame(lie['LIE[EELEC]'])
lie_eelec.to_csv(os.path.join(workDir, Output_name + "_eelec.csv"))
lie_evdw = pd.DataFrame(lie['LIE[EVDW]'])
lie_evdw.to_csv(os.path.join(workDir, Output_name + "_evdw.csv"))
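# Optional extra (a sketch, not part of the original protocol): a simple
# 10-frame running average makes the interaction energy trend easier to read.
window = 10
lie_running_avg = np.convolve(lie_total, np.ones(window)/window, mode='valid')
print("Running average over " + str(window) + "-frame windows: " + str(len(lie_running_avg)) + " points")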
#@title **Compute distance between the ligand and catalytic site residues**
#@markdown **Provide output file names below:**
Output_name = 'distance' #@param {type:"string"}
#@markdown **Cutoff distance to nearest residues (Angstrons):**
Distance = '5' #@param {type:"string"}
top = pt_topology
top.set_reference(traj_load[0])  # use the first frame as the reference for the distance-based selection
indices = traj_load.top.select('(:LIG<:' + str(Distance) + ')&!(:WAT|:Na+,Cl-,LIG)')
residues = [res.original_resid for res in top[indices].residues]
res_string = ','.join(str(e) for e in residues)
print("Selected residues = " + res_string + "\n")
mask = ":LIG :" + str(res_string)
dist = pt.distance(traj_load, mask)
dist_mean = mean(dist)
dist_stdev = stdev(dist)
print("Distance Average = " + str("{:.2f}".format(dist_mean)) + " \u00B1 " + str("{:.2f}".format(dist_stdev)) + " Å")
time = len(dist)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, dist, alpha=1, color = 'springgreen', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Distance [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(dist)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute distance between the ligand and specific residues**
#@markdown **Provide output file names below:**
Output_name = 'distance_select' #@param {type:"string"}
#@markdown **Type the number of residues separated by commas and without spaces (1,2,3...):**
Residues = '57,58,59' #@param {type:"string"}
mask = ":LIG :" + str(Residues)
dist = pt.distance(traj_load, mask)
print("Selected residues = " + Residues + "\n")
dist_mean = mean(dist)
dist_stdev = stdev(dist)
print("Distance Average = " + str("{:.2f}".format(dist_mean)) + " \u00B1 " + str("{:.2f}".format(dist_stdev)) + " Å")
time = len(dist)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, dist, alpha=1, color = 'magenta', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Distance [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(dist)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute RMSD of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'rmsd_ca' #@param {type:"string"}
rmsd = pt.rmsd(traj_load, ref = 0, mask = "@CA")
time = len(rmsd)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, rmsd, alpha=0.6, color = 'blue', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("RMSD [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(rmsd)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Plot RMSD as a distribution**
#@markdown **Provide output file names below:**
Output_name = 'rmsd_dist' #@param {type:"string"}
ax = sb.kdeplot(rmsd, color="blue", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('RMSD [$\AA$]', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Compute radius of gyration of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'radius_gyration' #@param {type:"string"}
radgyr = pt.radgyr(traj_load, mask = "@CA")
time = len(radgyr)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
plt.plot(time_array, radgyr, alpha=0.6, color = 'green', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Radius of gyration ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(radgyr)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Plot radius of gyration as a distribution**
#@markdown **Provide output file names below:**
Output_name = 'radius_gyration_dist' #@param {type:"string"}
ax = sb.kdeplot(radgyr, color="green", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('Radius of gyration ($\AA$)', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Compute RMSF of protein's CA atoms**
#@markdown **Provide output file names below:**
Output_name = 'rmsf_ca' #@param {type:"string"}
rmsf = pt.rmsf(traj_load, "@CA")
bfactor = pt.bfactors(traj_load, byres=True)
# Plotting:
plt.plot(rmsf[:,1], alpha=1.0, color = 'red', linewidth = 1.0)
plt.xlabel("Residue", fontsize = 14, fontweight = 'bold')
plt.ylabel("RMSF ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.xlim(0, len(rmsf[:-1]))
#plt.xticks(np.arange(min(rmsf[:1]), max(rmsf[:1])))
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(rmsf)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **2D RMSD**
#@markdown **Provide output file names below:**
Output_name = '2D_rmsd' #@param {type:"string"}
last_frame = len(time_array)
stride_ticks_f = (last_frame)/5
ticks_frame = np.arange(0,(len(time_array) + float(stride_ticks_f)), float(stride_ticks_f))
a = ticks_frame.astype(float)
stride_ticks_t = (simulation_ns)/5
tick_time = np.arange(0,(float(simulation_ns) + float(stride_ticks_t)), float(stride_ticks_t))
b = tick_time.astype(float)
mat1 = pt.pairwise_rmsd(traj_load, mask="@CA", frame_indices=range(int(number_frames_analysis)))
ax = plt.imshow(mat1, cmap = 'PRGn', origin='lower', interpolation = 'bicubic')
plt.title('2D RMSD')
plt.xlabel('Time (ns)', fontsize = 14, fontweight = 'bold')
plt.ylabel('Time (ns)', fontsize = 14, fontweight = 'bold')
# plt.xticks(fontsize = 12)
# plt.yticks(fontsize = 12)
plt.xticks(a, b.round(decimals=3), fontsize = 12)
plt.yticks(a, b.round(decimals=3), fontsize = 12)
# plt.xlim(0, a[-1])
# plt.ylim(0, a[-1])
cbar1 = plt.colorbar()
cbar1.set_label("RMSD ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(mat1)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Calculate eigenvectors of Principal Component Analysis (PCA)**
data = pt.pca(traj_load, fit=True, ref=0, mask='@CA', n_vecs=2)
#print('projection values of each frame to first mode = {} \n'.format(data[0][0]))
#print('projection values of each frame to second mode = {} \n'.format(data[0][1]))
#print('eigvenvalues of first two modes', data[1][0])
#print("")
#print('eigvenvectors of first two modes: \n', data[1][1])
last_frame = len(time_array)
stride_ticks_f = (last_frame)/5
ticks_frame = np.arange(0,(len(time_array) + float(stride_ticks_f)), float(stride_ticks_f))
a = ticks_frame.astype(float)
a2 = a.tolist()
stride_ticks_t = (simulation_ns)/5
tick_time = np.arange(0,(float(simulation_ns) + float(stride_ticks_t)), float(stride_ticks_t))
b = tick_time.astype(float)
#@markdown **Provide output file names below:**
Output_name = 'PCA' #@param {type:"string"}
Output_PC1 = 'PC1' #@param {type:"string"}
Output_PC2 = 'PC2' #@param {type:"string"}
%matplotlib inline
%config InlineBackend.figure_format = 'retina' # high resolution
projection_data = data[0]
plt.title(r'PCA of C-$\alpha$')
PC1 = data[0][0]
PC2 = data[0][1]
a = plt.scatter(PC1,PC2, c=range(int(number_frames_analysis)), cmap='Greens', marker='o',s=8, alpha=1)
plt.clim(0, last_frame)
plt.xlabel('PC1', fontsize = 14, fontweight = 'bold')
plt.ylabel('PC2', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
# N = len(number_frames)
# x2 = np.arange(N)
cbar1 = plt.colorbar(a, orientation="vertical")
cbar1.set_label('Time(ns)', fontsize = 14, fontweight = 'bold')
cbar1.set_ticks(a2)
cbar1.set_ticklabels(b.round(decimals=3))
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
pc1=pd.DataFrame(PC1)
pc1.to_csv(os.path.join(workDir, Output_PC1 + ".csv"))
pc2=pd.DataFrame(PC2)
pc2.to_csv(os.path.join(workDir, Output_PC2 + ".csv"))
#@title **Plot Principal Component 1 (PC1) and Principal Component 2 (PC2) as a distribution**
Output_name = 'PCA_dist' #@param {type:"string"}
fig = plt.figure(figsize=(9,5))
plt.subplot(1, 2, 1)
ax = sb.kdeplot(PC1, color="green", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('PC1', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.subplot(1, 2, 2)
ax2 = sb.kdeplot(PC2, color="purple", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('PC2', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['bottom'].set_visible(True)
ax2.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Pearson's Cross Correlation (CC)**
#@markdown **Provide output file names below:**
Output_name = 'cross_correlation' #@param {type:"string"}
traj_align = pt.align(traj_load, mask='@CA', ref=0)
mat_cc = matrix.correl(traj_align, '@CA')
ax = plt.imshow(mat_cc, cmap = 'PiYG_r', interpolation = 'bicubic', vmin = -1, vmax = 1, origin='lower')
plt.xlabel('Residues', fontsize = 14, fontweight = 'bold')
plt.ylabel('Residues', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
cbar1 = plt.colorbar()
cbar1.set_label('$CC_{ij}$', fontsize = 14, fontweight = 'bold')
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(mat_cc)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
###Output
_____no_output_____ |
Geron/Geron_ch15_Sequences/Sequence_01_TimeSeries.ipynb | ###Markdown
Time Series
###Code
import sys
import sklearn
import tensorflow
import numpy as np
import tensorflow as tf
from tensorflow import keras
import os
from pathlib import Path
np.random.seed(42)
tf.random.set_seed(42)
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Data generator and baseline for comparison.
# Combination of 2 sine waves plus noise
def generate_time_series (batch_size, n_steps):
freq1, freq2, offset1, offset2 = np.random.rand(4, batch_size, 1)
time = np.linspace(0, 1, n_steps)
series = 0.5 * np.sin((time - offset1) * (freq1 * 10 + 10))
series += 0.2 * np.sin((time - offset2) * (freq2 * 20 + 20))
series += 0.1 * (np.random.rand(batch_size, n_steps) - 0.5)
return series[..., np.newaxis].astype(np.float32)
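# Quick shape check of the generator defined above: a batch of 3 series,
# 50 steps each, with a single feature dimension.
demo_batch = generate_time_series(3, 50)
print(demo_batch.shape)  # expected: (3, 50, 1)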
# Training set, etc.
n_steps = 50
series = generate_time_series (10000, n_steps+1)
X_train,y_train = series[:7000, :n_steps], series[:7000, -1]
X_valid,y_valid = series[7000:9000, :n_steps], series[7000:9000, -1]
X_test,y_test = series[9000:, :n_steps], series[9000:, -1]
X_train.shape
y_train.shape
def plot_series(series, y=None, y_pred=None, x_label="$t$", y_label="$x(t)$"):
plt.plot(series, ".-")
if y is not None:
plt.plot(n_steps, y, "bx", markersize=10)
if y_pred is not None:
plt.plot(n_steps, y_pred, "ro")
plt.grid(True)
if x_label:
plt.xlabel(x_label, fontsize=16)
if y_label:
plt.ylabel(y_label, fontsize=16, rotation=0)
plt.hlines(0, 0, 100, linewidth=1)
plt.axis([0, n_steps + 1, -1, 1])
fig, axes = plt.subplots(nrows=1, ncols=3, sharey=True, figsize=(12, 4))
for col in range(3):
plt.sca(axes[col])
plot_series(X_valid[col, :, 0], y_valid[col, 0],
y_label=("$x(t)$" if col==0 else None))
plt.show()
# Here are three validation instances
###Output
_____no_output_____
###Markdown
Try a naive predictor: prediction = previous value
###Code
naive_pred = X_valid[:,-1] # prediction = previous value
naive_mse = np.mean(keras.losses.mean_squared_error(y_valid,naive_pred))
naive_mse
# MSE
plot_series(X_valid[0, :, 0], y_valid[0, 0], naive_pred[0, 0])
plt.show()
# The blue X is the correct value generated by the series.
# The red dot is the naive prediction for validation instance 0.
###Output
_____no_output_____
###Markdown
Try a traditional neural network. Use the keras Sequential API. By default, this uses linear regression. Flatten each instance into 50 features (losing the sense of time). Use a fully dense neural network. Use MSE loss. Use the Adam optimizer (default). The 219 shown per epoch is the number of mini-batches (7000 training samples at the default batch size of 32); the book's run on Google Colab displayed the sample count (7000) instead.
###Code
dnn = keras.models.Sequential(
[keras.layers.Flatten(input_shape=[50,1]),
keras.layers.Dense(1)]) # more layers didn't help
dnn.compile(loss="mse", optimizer="adam")
history = dnn.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
# Num parameters = W + bias = 50+1 = 51.
# MSE loss stabilizes at 0.004
# Different route to same estimate
dnn.evaluate(X_valid, y_valid)
###Output
63/63 [==============================] - 0s 2ms/step - loss: 0.0046
###Markdown
Simple RNN: single layer, single neuron. No need to specify 50 timepoints. By default, the initial hidden state h_init is zero. By default, SimpleRNN activation is tanh(). The 219 shown per epoch is again the number of mini-batches, not the sample count. The number of parameters to train is 3: the input weight, the recurrent weight, and the bias. This simple RNN does not beat the simple DNN.
###Code
units = 1
optimizer = keras.optimizers.Adam(lr=0.005)
rnn1 = keras.models.Sequential([
keras.layers.SimpleRNN(units,input_shape=[None,1])
]) # num timepoints = None
rnn1.compile(loss="mse", optimizer=optimizer)
history = rnn1.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/20
219/219 [==============================] - 3s 13ms/step - loss: 0.1018 - val_loss: 0.0443
Epoch 2/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0345 - val_loss: 0.0281
Epoch 3/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0243 - val_loss: 0.0210
Epoch 4/20
219/219 [==============================] - 2s 9ms/step - loss: 0.0192 - val_loss: 0.0172
Epoch 5/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0162 - val_loss: 0.0148
Epoch 6/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0143 - val_loss: 0.0132
Epoch 7/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0130 - val_loss: 0.0122
Epoch 8/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0123 - val_loss: 0.0115
Epoch 9/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0118 - val_loss: 0.0112
Epoch 10/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0115 - val_loss: 0.0110
Epoch 11/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 12/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 13/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 14/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 15/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 16/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 17/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 18/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 19/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
Epoch 20/20
219/219 [==============================] - 2s 10ms/step - loss: 0.0114 - val_loss: 0.0109
###Markdown
Wider RNN: add more units, i.e. more neurons in the single layer. This helps a little bit.
###Code
units = 5
rnn2 = keras.models.Sequential([
keras.layers.SimpleRNN(units,input_shape=[None,1])
]) # num timepoints = None
rnn2.compile(loss="mse", optimizer=optimizer)
history = rnn2.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
def plot_learning_curves(loss, val_loss):
plt.plot(np.arange(len(loss)) + 0.5, loss, "b.-", label="Training loss")
plt.plot(np.arange(len(val_loss)) + 1, val_loss, "r.-", label="Validation loss")
plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))
plt.axis([1, 20, 0, 0.05])
plt.legend(fontsize=14)
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.grid(True)
plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()
###Output
_____no_output_____
###Markdown
Deeper RNN: add more layers. SimpleRNN(return_sequences=True) outputs a 3D array of size [batch size, time steps, feature dimensionality]. By default, SimpleRNN(return_sequences=False) outputs a 2D array of size [batch size, feature dimensionality] for the last time step only. Use True for all but the last layer.
###Code
rnn3 = keras.models.Sequential([
keras.layers.SimpleRNN(20,return_sequences=True,input_shape=[None,1]),
keras.layers.SimpleRNN(20,return_sequences=True),
keras.layers.SimpleRNN(1)
])
# The single-unit last layer is there to output a single number.
rnn3.compile(loss="mse", optimizer=optimizer)
history = rnn3.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()
###Output
_____no_output_____
###Markdown
Deep RNN with dense layer: we can do better using Dense for the last layer. Continue to use a single unit so we get a single number output. But keras Dense() does better than keras SimpleRNN in the last layer. It trains faster and gets better accuracy. It can have a different activation function from the other layers.
###Code
# deep RNN with dense layer
rnn4 = keras.models.Sequential([
keras.layers.SimpleRNN(20,return_sequences=True,input_shape=[None,1]),
keras.layers.SimpleRNN(20), # no sequences into the dense layer
keras.layers.Dense(1)
])
rnn4.compile(loss="mse", optimizer=optimizer)
history = rnn4.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
plot_learning_curves(history.history["loss"], history.history["val_loss"])
plt.show()
# Now it trains much faster.
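# Example predictions (a quick sketch added for illustration): next-step
# forecasts for the first three validation series, next to the true targets.
y_pred = rnn4.predict(X_valid[:3])
print(np.c_[y_pred, y_valid[:3]])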
###Output
_____no_output_____ |
technical_validation/label_validation.ipynb | ###Markdown
Notebook to analyze various statistics of the labeled events. Furthermore, the events were visually inspected in the labeling notebooks to ensure their quality. Import and Initialize Everything
###Code
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import h5py
import pandas as pd
import os
import sys
from pathlib import Path
from datetime import datetime
from datetime import timedelta
import math
import pdb
import scipy
# Add project path to path for import
project_path = os.path.abspath("..")
if project_path not in sys.path:
sys.path.append(project_path)
# Add module path to path for import
module_path = os.path.abspath("../data_utility/data_utility.py")
if module_path not in sys.path:
sys.path.append(module_path)
from data_utility import CREAM_Day # class to work with a day of the CREAM Dataset
import seaborn as sns
%load_ext autoreload
# Reload all modules every time before executing the Python code typed.
%autoreload 2
# Import some graphical modules
from IPython.display import display, clear_output
from ipywidgets import Button, Layout, ButtonStyle, HBox, VBox, widgets, Output
from IPython.display import SVG, display, clear_output
%matplotlib widget
import subprocess
import glob
machine = "X8"
PATH_TO_DATA = os.path.abspath(os.path.join("..", "..","rbgstorage", "nilm", "i13-dataset", "CREAM", machine))
ALL_DAYS = glob.glob(os.path.join(PATH_TO_DATA, "*"))
ALL_DAYS = [os.path.basename(d) for d in ALL_DAYS if "2018" in os.path.basename(d) or "2019" in os.path.basename(d) ]
ALL_DAYS.sort()
###Output
_____no_output_____
###Markdown
Load the Event Files
###Code
#necessary for the plotting
# Load the events
day_path = os.path.join(PATH_TO_DATA, ALL_DAYS[0]) #arbitrary day to initialize the object
current_CREAM_day = CREAM_Day(cream_day_location=day_path,use_buffer=True, buffer_size_files=2)
if machine == "X9":
all_component_events_fine = current_CREAM_day.load_component_events(os.path.join(PATH_TO_DATA, "component_events_fine.csv"), filter_day=False)
all_component_events_coarse = current_CREAM_day.load_component_events(os.path.join(PATH_TO_DATA, "component_events_coarse.csv"), filter_day=False)
else:
all_component_events = current_CREAM_day.load_component_events(os.path.join(PATH_TO_DATA, "component_events.csv"), filter_day=False)
# Load the product and the maintenance events (the raw ones, per minute events) and filter for the day
all_maintenance_events = current_CREAM_day.load_machine_events(os.path.join(PATH_TO_DATA, "maintenance_events.csv"), raw_file=False, filter_day=False)
all_product_events = current_CREAM_day.load_machine_events(os.path.join(PATH_TO_DATA, "product_events.csv"), raw_file=False, filter_day=False)
###Output
_____no_output_____
###Markdown
Analyze the Event Types: display the different event types and labeled components, together with their cardinality
###Code
for events_df in [all_maintenance_events, all_product_events]:
    print(events_df.Event_Type.value_counts())
    print("-------------------------------------")
# the component event files differ between machines (see the loading cell above)
if machine == "X9":
    print(all_component_events_fine.Component.value_counts())
    print("-------------------------------------")
    print(all_component_events_coarse.Component.value_counts())
else:
    print(all_component_events.Component.value_counts())
###Output
_____no_output_____
###Markdown
Functions necessary to do so
###Code
def plot_event_durations(events_df:pd.DataFrame, event_type_column: str = "Event_Type"):
"""
Function to plot the event duration, for every event type.
Parameters
----------
events_df (pd.DataFrame): maintenance events or product events dataframe
event_type_column (str): Name of the column containing the event type.
Returns
-------
"""
for e_type in np.unique(events_df[event_type_column]):
x = events_df[events_df[event_type_column] == e_type].Event_Duration_Seconds
sns.distplot(x, label=e_type)
plt.legend()
plt.show()
def print_event_duration_statistics(events_df:pd.DataFrame, event_type_column: str = "Event_Type"):
"""
Function to print the event duration, for every event type.
Parameters
----------
events_df (pd.DataFrame): maintenance events or product events dataframe
event_type_column (str): Name of the column containing the event type.
Returns
-------
"""
data = { "event type" : [],
"samples": [],
"mean" : [],
"standard deviation" : []}
for e_type in np.unique(events_df[event_type_column]):
x = events_df[events_df[event_type_column] == e_type].Event_Duration_Seconds
n_samples = len(events_df[events_df[event_type_column] == e_type])
mean = np.mean(x)
stdev = np.std(x)
data["samples"].append(n_samples)
data["mean"].append(mean)
data["standard deviation"].append(stdev)
data["event type"].append(e_type)
data = pd.DataFrame(data)
data = data.sort_values(["samples"], ascending=False)
print(data.round(2).to_latex(index=False))
###Output
_____no_output_____
###Markdown
Product Events Event Durations
###Code
print_event_duration_statistics(all_product_events, "Event_Type")
print_event_duration_statistics(all_maintenance_events, "Event_Type")
plot_event_durations(all_product_events, "Event_Type")
###Output
_____no_output_____
###Markdown
Maintenance Events Event Durations
###Code
plot_event_durations(all_maintenance_events, "Event_Type")
###Output
_____no_output_____
###Markdown
Event Distribution per Hour
###Code
def create_time_bin(hours : float, minutes : float) -> str:
"""
Creates a hour:minutes timestamp, ceiled to full 30 minutes.
All minutes below 15, become 0.
All between 15 and 45 minutes, become 30 minutes.
All minutes between 45 and 60 become 0 and belong to the next hour.
"""
if minutes < 15:
minutes = "00"
elif minutes >= 15 and minutes < 45:
minutes = "30"
elif minutes >= 45:
minutes = "00"
        hours = (hours + 1) % 24  # wrap so 23:45-24:00 lands in the 00:00 bin
if hours < 10:
hours = "0" + str(hours)
else:
hours = str(hours)
return hours + ":" + minutes
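# Quick examples of the binning rules described in the docstring:
print(create_time_bin(9, 10))  # -> 09:00
print(create_time_bin(9, 20))  # -> 09:30
print(create_time_bin(9, 50))  # -> 10:00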
# create a new column with: hour:30, hour:0 in it for the x-axis as the labels
all_product_events["Time_Bin"] = all_product_events.Start_Timestamp.apply(lambda x: create_time_bin(x.hour, x.minute))
times, counts = np.unique(all_product_events["Time_Bin"], return_counts=True)
plt.figure(figsize=(16,4))
plt.title("Product Events")
plt.xlabel("Time")
plt.ylabel("Number of Events")
sns.barplot(x=times, y=counts, color="b")
plt.show()
# create a new column with: hour:30, hour:0 in it for the x-axis as the labels
all_maintenance_events["Time_Bin"] = all_maintenance_events.Start_Timestamp.apply(lambda x: create_time_bin(x.hour, x.minute))
times, counts = np.unique(all_maintenance_events["Time_Bin"], return_counts=True)
plt.figure(figsize=(16,4))
plt.title("Maintenance Events")
plt.xlabel("Time")
plt.ylabel("Number of Events")
sns.barplot(x=times, y=counts,color="b")
plt.show()
# create a new column with: hour:30, hour:0 in it for the x-axis as the labels
all_maintenance_events["Time_Bin"] = all_maintenance_events.Start_Timestamp.apply(lambda x: create_time_bin(x.hour, x.minute))
all_product_events["Time_Bin"] = all_product_events.Start_Timestamp.apply(lambda x: create_time_bin(x.hour, x.minute))
times, counts = np.unique(np.concatenate([all_product_events["Time_Bin"],all_maintenance_events["Time_Bin"]]), return_counts=True)
plt.figure(figsize=(16,4))
plt.title("Product and Maintenance Events")
plt.xlabel("Time")
plt.ylabel("Number of Events")
sns.barplot(x=times, y=counts,color="b")
plt.show()
fontdict_text = {"size" : 18}
# create a new column with: hour:30, hour:0 in it for the x-axis as the labels
all_component_events["Time_Bin"] = all_component_events.Timestamp.apply(lambda x: create_time_bin(x.hour, x.minute))
times, counts = np.unique(all_component_events["Time_Bin"] , return_counts=True)
plt.figure(figsize=(18,4))
plt.title("Electrical Component Events")
plt.xlabel("Time", fontdict=fontdict_text)
plt.ylabel("Number of Events", fontdict=fontdict_text)
plt.xticks(fontsize = fontdict_text["size"])
sns.barplot(x=times, y=counts,color="b")
plt.show()
for component in np.unique(all_component_events.Component):
component_events = all_component_events[all_component_events.Component == component].Timestamp.apply(lambda x: create_time_bin(x.hour, x.minute))
times, counts = np.unique(component_events, return_counts=True)
plt.figure(figsize=(18,4))
plt.title(component + " Events")
plt.xlabel("Time")
plt.ylabel("Number of Events")
sns.barplot(x=times, y=counts,color="b")
plt.show()
###Output
_____no_output_____
###Markdown
Electrical Component Events: mean instantaneous power of the components
###Code
mean_instant_power_list = []
component_list = []
x_axis = [] # x-axis, first component at 1, second at 2, third at 3
for index, component in enumerate( np.unique(all_component_events.Component), start=1):
if component == "unlabeled": #skip the unlabeled ones
continue
component_events = all_component_events[all_component_events.Component == component]
component_events = component_events.sample(n=100, random_state=10)
    # for efficiency reasons, iterate over each day separately and construct
    # the CREAM_Day object once per day instead of once per event
    for day_date in np.unique(component_events.Date):
        cream_day = CREAM_Day(cream_day_location=os.path.join(PATH_TO_DATA, str(day_date)), use_buffer=True, buffer_size_files=10)
        for event in component_events[component_events.Date == day_date].itertuples():
            voltage, current = cream_day.load_time_frame(event.Timestamp, duration=0.3, return_noise=False)
instant_power = voltage * current
mean_instant_power_list.append(np.mean(instant_power))
component_list.append(component)
x_axis.append(index)
import matplotlib
%matplotlib inline
component_list = np.array(component_list)
mean_instant_power_list = np.array(mean_instant_power_list)
fig, ax = plt.subplots(1, 3, figsize=(15,5), sharex=True, sharey=True)
#matplotlib.rcParams.update({'font.size': 10})
for i, component in enumerate(np.unique(component_list)):
mask = np.where(component_list == component)[0]
    _, bins = np.histogram(mean_instant_power_list[mask], bins=30)  # common bin edges for this component
    counts, bin_edges, _ = ax[i].hist(mean_instant_power_list[mask], bins, color="b")
    max_bin = np.argmax(counts)        # index of the fullest bin
    max_value = bin_edges[max_bin]     # left edge of the fullest bin
    # ax[i].set_xticklabels(bin_edges)
ax[i].set_title(component, fontsize=18)
ax[i].set_ylim(0,60)
ax[i].set_ylabel("Samples", fontsize=16)
ax[i].set_xlabel("Mean instantenous power,\n with maximum bin at value %.2f" %(max_value), fontsize=16)
fig.tight_layout()
plt.show()
fig.savefig("./component_mean_instant_power.pdf")
###Output
_____no_output_____ |
classes_and_inheritance_practice.ipynb | ###Markdown
Week 1: Classes
###Code
class Point():
pass
#instance (e.g. factory = instances are the products of a factory)
#instance 1
point1 = Point()
#instance 2
point2 = Point()
#instance variable - lives inside an instance
point1.x = 5
point2.x = 10
print(point1)
print(point2)
print(point1 is point2) # different instances
print(point1.x)
print(point2.x)
class Point():
    #method (it's different from a function)
def getX(self) :
return self.x
point1 = Point()
point2 = Point()
point1.x = 5
point2.x = 10
print(point1.getX()) #method belongs to a class
class Point:
""" Point class for representing and manipulating x,y coordinates. """
    def __init__(self, x, y): # self is passed in automatically when the constructor is called
""" Create a new point at the origin """
self.x = x
self.y = y
def getX(self) :
return self.x
point1 = Point(5, 10)
point2 = Point(2, 3)
type(point1)
print(point1)
print('')
print(point1.getX())
class Point() :
def __init__ (self, x, y) :
self.x = x
self.y = y
def getX(self) :
return self.x
def getY(self) :
return self.y
point1 = Point(10, 100)
class Point:
""" Point class for representing and manipulating x,y coordinates. """
def __init__(self, initX, initY):
self.x = initX
self.y = initY
def getX(self):
return self.x
def getY(self):
return self.y
def distanceFromOrigin(self):
return ((self.x ** 2) + (self.y ** 2)) ** 0.5
p = Point(7,6)
print(p.distanceFromOrigin())
class Animal :
'''Create a class called Animal that accepts two numbers as inputs and
assigns them respectively to two instance variables: arms and legs.
Create an instance method called limbs that, when called, returns the total number of limbs the animal has.
To the variable name spider, assign an instance of Animal that has 4 arms and 4 legs.
Call the limbs method on the spider instance and save the result to the variable name spidlimbs.'''
def __init__(self, arms, legs) :
self.arms = arms
self.legs = legs
def limbs(self) :
return self.arms + self.legs
spider = Animal(4, 4)
spidlimbs = spider.limbs()
print(spidlimbs)
cityNames = ['Detroit', 'Ann Arbor', 'Pittsburgh', 'Mars', 'New York']
populations = [680250, 117070, 304391, 1683, 8406000]
states = ['MI', 'MI', 'PA', 'PA', 'NY']
city_tuples = zip(cityNames, populations, states)
class City :
def __init__(self, name, population, state) :
self.name = name
self.population = population
self.state = state
def __str__(self) :
return '{}, {} (pop:: {})'.format(self.name, self.state, self.population)
cities = []
#for city_tup in city_tuples :
# name, pop, state = city_tup
# city = City(name, pop, state)
# cities.append(city)
cities = [City(n, p, s) for (n, p, s) in city_tuples]
cities
import math
class Point:
""" Point class for representing and manipulating x,y coordinates. """
def __init__(self, initX, initY):
self.x = initX
self.y = initY
def getX(self):
return self.x
def getY(self):
return self.y
def distanceFromOrigin(self):
return ((self.x ** 2) + (self.y ** 2)) ** 0.5
def distance(point1, point2):
xdiff = point2.getX()-point1.getX()
ydiff = point2.getY()-point1.getY()
dist = math.sqrt(xdiff**2 + ydiff**2)
return dist
p = Point(4,3)
q = Point(0,0)
print(distance(p,q))
class Cereal :
def __init__(self, name, brand, fiber) :
self.name = name
self.brand = brand
self.fiber = int(fiber)
def __str__(self) :
return '{} cereal is produced by {} and has {} grams of fiber in every serving!'.format(
self.name, self.brand, self.fiber
)
c1 = Cereal('Corn Flakes', "Kellogg's", 2)
c2 = Cereal('Honey Nut Cheerios', 'General Mills', 3)
print(c1)
print(c2)
class Point:
""" Point class for representing and manipulating x,y coordinates. """
def __init__(self, initX, initY):
self.x = initX
self.y = initY
def __str__(self) :
return 'Point ({}, {})'.format(self.x, self.y)
def __add__(self, otherPoint) :
return Point(self.x + otherPoint.x,
self.y + otherPoint.y)
def __sub__(self, otherPoint) :
return Point(self.x - otherPoint.x,
self.y - otherPoint.y)
def distance(point1, point2):
    # this version of Point has no getX/getY, so use the attributes directly
    xdiff = point2.x - point1.x
    ydiff = point2.y - point1.y
    return math.sqrt(xdiff**2 + ydiff**2)
p1 = Point(5, 10)
p2 = Point(6, 20)
print(p1)
print(p2)
print(p1 + p2)
print(p2 - p1)
print(p1 + p2 + p1 + p2)
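# Using the distance helper completed above:
print(distance(p1, p2))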
class Fruit :
def __init__(self, name, price) :
self.name = name
self.price = price
def sort_priority(self) :
return self.price
L = [
Fruit('Cherry', 10),
Fruit('Apple', 5),
Fruit('Blueberry', 20)
]
for f in sorted(L, key = Fruit.sort_priority) :
print( f.name)
print('Another way: ')
for f in sorted(L, key = lambda x: x.sort_priority()) :
print( f.name)
L = ["Cherry", "Apple", "Blueberry"]
print(sorted(L, key=len))
#alternative form using lambda, if you find that easier to understand
print(sorted(L, key= lambda x: len(x)))
class Fruit():
def __init__(self, name, price):
self.name = name
self.price = price
L = [Fruit("Cherry", 10), Fruit("Apple", 5), Fruit("Blueberry", 20)]
for f in sorted(L, key=lambda x: x.price):
print(f.name)
#Sometimes you will find it convenient to define a method for the class that does some
# computation on the data in an instance.
# In this case, our class is too simple to really illustrate that.
# But to simulate it, I’ve defined a method sort_priority that just returns the price that’s stored in the instance.
# Now, that method, sort_priority takes one instance as input and returns a number.
# So it is exactly the kind of function we need to provide as the key parameter for sorted.
# Here it can get a little confusing: to refer to that method, without actually invoking it,
# you can refer to Fruit.sort_priority.
# This is analogous to the code above that referred to len rather than invoking len().
class Fruit():
def __init__(self, name, price):
self.name = name
self.price = price
def sort_priority(self):
return self.price
L = [Fruit("Cherry", 10), Fruit("Apple", 5), Fruit("Blueberry", 20)]
print("-----sorted by price, referencing a class method-----")
for f in sorted(L, key=Fruit.sort_priority):
print(f.name)
print("---- one more way to do the same thing-----")
for f in sorted(L, key=lambda x: x.sort_priority()):
print(f.name)
class Point:
""" Point class for representing and manipulating x,y coordinates. """
printed_rep = "X"
def __init__(self, initX, initY):
self.x = initX
self.y = initY
def graph(self):
rows = []
size = max(int(self.x), int(self.y)) + 2
for j in range(size-1) :
if (j+1) == int(self.y):
special_row = str((j+1) % 10) + (" "*(int(self.x) -1)) + self.printed_rep
rows.append(special_row)
else:
rows.append(str((j+1) % 10))
rows.reverse() # put higher values of y first
x_axis = ""
for i in range(size):
x_axis += str(i % 10)
rows.append(x_axis)
return "\n".join(rows)
p1 = Point(2, 3)
p2 = Point(3, 12)
print(p1.graph())
print()
print(p2.graph())
# tamagochi game
from random import randrange
class Pet():
boredom_decrement = 4
hunger_decrement = 6
boredom_threshold = 5
hunger_threshold = 10
sounds = ['Mrrp']
def __init__(self, name = "Kitty"):
self.name = name
self.hunger = randrange(self.hunger_threshold)
self.boredom = randrange(self.boredom_threshold)
self.sounds = self.sounds[:] # copy the class attribute, so that when we make changes to it, we won't affect the other Pets in the class
def clock_tick(self):
self.boredom += 1
self.hunger += 1
def mood(self):
if self.hunger <= self.hunger_threshold and self.boredom <= self.boredom_threshold:
return "happy"
elif self.hunger > self.hunger_threshold:
return "hungry"
else:
return "bored"
def __str__(self):
state = " I'm " + self.name + ". "
state += " I feel " + self.mood() + ". "
# state += "Hunger {} Boredom {} Words {}".format(self.hunger, self.boredom, self.sounds)
return state
def hi(self):
print(self.sounds[randrange(len(self.sounds))])
self.reduce_boredom()
def teach(self, word):
self.sounds.append(word)
self.reduce_boredom()
def feed(self):
self.reduce_hunger()
def reduce_hunger(self):
self.hunger = max(0, self.hunger - self.hunger_decrement)
def reduce_boredom(self):
self.boredom = max(0, self.boredom - self.boredom_decrement)
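# Tiny usage sketch of the Pet class before the full game loop below
# (demo_pet is an illustrative name, not part of the game):
demo_pet = Pet("Fido")
demo_pet.teach("Woof")
demo_pet.hi()
print(demo_pet)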
import sys
# sys.setExecutionLimit(60000)  # Runestone-specific helper; not available in standard Python
def whichone(petlist, name):
for pet in petlist:
if pet.name == name:
return pet
return None # no pet matched
def play():
animals = []
option = ""
base_prompt = """
Quit
Adopt <petname_with_no_spaces_please>
Greet <petname>
Teach <petname> <word>
Feed <petname>
Choice: """
feedback = ""
while True:
action = input(feedback + "\n" + base_prompt)
feedback = ""
words = action.split()
if len(words) > 0:
command = words[0]
else:
command = None
if command == "Quit":
print("Exiting...")
return
elif command == "Adopt" and len(words) > 1:
if whichone(animals, words[1]):
feedback += "You already have a pet with that name\n"
else:
animals.append(Pet(words[1]))
elif command == "Greet" and len(words) > 1:
pet = whichone(animals, words[1])
if not pet:
feedback += "I didn't recognize that pet name. Please try again.\n"
print()
else:
pet.hi()
elif command == "Teach" and len(words) > 2:
pet = whichone(animals, words[1])
if not pet:
feedback += "I didn't recognize that pet name. Please try again."
else:
pet.teach(words[2])
elif command == "Feed" and len(words) > 1:
pet = whichone(animals, words[1])
if not pet:
feedback += "I didn't recognize that pet name. Please try again."
else:
pet.feed()
else:
feedback+= "I didn't understand that. Please try again."
for pet in animals:
pet.clock_tick()
feedback += "\n" + pet.__str__()
play()
# course 4 Assessment 1
# Define a class called Bike that accepts a string and a float as input,
# and assigns those inputs respectively to two instance variables, color and price.
# Assign to the variable testOne an instance of Bike whose color is blue and whose price is 89.99.
# Assign to the variable testTwo an instance of Bike whose color is purple and whose price is 25.0.
class Bike :
def __init__(self, color, price) :
self.color = color
self.price = float(price)
testOne = Bike('blue', 89.99)
testTwo = Bike('purple', 25.0)
# Create a class called AppleBasket whose constructor accepts two inputs: a string representing a color, and a number representing a quantity of apples.
# The constructor should initialize two instance variables: apple_color and apple_quantity.
# Write a class method called increase that increases the quantity by 1 each time it is invoked.
# You should also write a __str__ method for this class that returns a string of the format: "A basket of [quantity goes here] [color goes here] apples."
# e.g. "A basket of 4 red apples." or "A basket of 50 blue apples."
# (Writing some test code that creates instances and assigns values to variables may help you solve this problem!)
class AppleBasket :
    def __init__(self, apple_color, apple_quantity) :
        self.apple_color = apple_color
        self.apple_quantity = apple_quantity
    def increase(self) :
        self.apple_quantity += 1
    def __str__(self) :
        return 'A basket of {} {} apples.'.format(self.apple_quantity, self.apple_color)
a1 = AppleBasket('red', 4)
print(a1)
a1.increase()
print(a1)
a1.increase()
print(a1)
# Define a class called BankAccount that accepts the name you want associated with your bank account in a string,
# and an integer that represents the amount of money in the account.
# The constructor should initialize two instance variables from those inputs: name and amt.
# Add a string method so that when you print an instance of BankAccount,
# you see "Your account, [name goes here], has [start_amt goes here] dollars."
# Create an instance of this class with "Bob" as the name and 100 as the amount.
# Save this to the variable t1.
class BankAccount :
def __init__(self, name, amt) :
self.name = name
self.amt = amt
def __str__(self) :
return 'Your account, {}, has {} dollars.'.format(self.name, self.amt)
t1 = BankAccount('Bob', 100)
print(t1)
###Output
Your account, Bob, has 100 dollars.
###Markdown
Week 2: Inheritance
###Code
CURRENT_YEAR = 2019
class Person :
def __init__(self, name, year_born) :
self.name = name
self.year_born = year_born
def getAge(self) :
return CURRENT_YEAR - self.year_born
def __str__(self) :
return '{} ({})'.format(self.name, self.getAge())
alice = Person('Alice Smith', 1990)
print(alice)
class Student(Person) :
def __init__(self, name, year_born) :
Person.__init__(self, name, year_born)
self.knowledge = 0
def study(self) :
self.knowledge += 1
alice = Student('Alice Smith', 1990)
alice.study()
print(alice.knowledge)
alice.study()
print(alice.knowledge)
class Book :
def __init__(self, title, author) :
self.title = title
self.author = author
def __str__(self) :
return '{} by {}'.format(self.title, self.author)
myBook = Book('The Odyssey', 'Homer')
print(myBook)
class PaperBook(Book) :
def __init__(self, title, author, numPages) :
Book.__init__(self, title, author)
self.numPages = numPages
class Ebook(Book) :
def __init__(self, title, author, size) :
Book.__init__(self, title, author)
self.size = size
class Library:
def __init__(self) :
self.books = []
def addBook(self, book) :
self.books.append(book)
def getNumBooks(self) :
return len(self.books)
myBook = Ebook('The Odyssey', 'Homer', 2)
myPaperBook = PaperBook('The Odyssey', 'Homer', 500)
#print(myBook.size)
#print(myPaperBook.numPages)
aadl = Library()
aadl.addBook(myBook)
aadl.addBook(myPaperBook)
print(aadl.getNumBooks())
from random import randrange
# Here's the original Pet class
class Pet():
boredom_decrement = 4
hunger_decrement = 6
boredom_threshold = 5
hunger_threshold = 10
sounds = ['Mrrp']
def __init__(self, name = "Kitty"):
self.name = name
self.hunger = randrange(self.hunger_threshold)
self.boredom = randrange(self.boredom_threshold)
self.sounds = self.sounds[:] # copy the class attribute, so that when we make changes to it, we won't affect the other Pets in the class
def clock_tick(self):
self.boredom += 1
self.hunger += 1
def mood(self):
if self.hunger <= self.hunger_threshold and self.boredom <= self.boredom_threshold:
return "happy"
elif self.hunger > self.hunger_threshold:
return "hungry"
else:
return "bored"
def __str__(self):
state = " I'm " + self.name + ". "
state += " I feel " + self.mood() + ". "
# state += "Hunger %d Boredom %d Words %s" % (self.hunger, self.boredom, self.sounds)
return state
def hi(self):
print(self.sounds[randrange(len(self.sounds))])
self.reduce_boredom()
def teach(self, word):
self.sounds.append(word)
self.reduce_boredom()
def feed(self):
self.reduce_hunger()
def reduce_hunger(self):
self.hunger = max(0, self.hunger - self.hunger_decrement)
def reduce_boredom(self):
self.boredom = max(0, self.boredom - self.boredom_decrement)
class Dog(Pet):
sounds = ['Woof', 'Ruff']
def feed(self):
Pet.feed(self)
print("Arf! Thanks!")
d1 = Dog("Astro")
d1.feed()
class Bird(Pet):
sounds = ["chirp"]
def __init__(self, name="Kitty", chirp_number=2):
Pet.__init__(self, name) # call the parent class's constructor
# basically, call the SUPER -- the parent version -- of the constructor, with all the parameters that it needs.
self.chirp_number = chirp_number # now, also assign the new instance variable
def hi(self):
for i in range(self.chirp_number):
print(self.sounds[randrange(len(self.sounds))])
self.reduce_boredom()
b1 = Bird('tweety', 5)
b1.teach("Polly wanna cracker")
b1.hi()
# no install needed here: sys is part of the Python standard library
import sys
sys.setExecutionLimit(60000)
from random import randrange
class Pet(object):
boredom_decrement = 4
hunger_decrement = 6
boredom_threshold = 5
hunger_threshold = 10
sounds = ['Mrrp']
def __init__(self, name = "Kitty"):
self.name = name
self.hunger = randrange(self.hunger_threshold)
self.boredom = randrange(self.boredom_threshold)
self.sounds = self.sounds[:] # copy the class attribute, so that when we make changes to it, we won't affect the other Pets in the class
def clock_tick(self):
self.boredom += 1
self.hunger += 1
def mood(self):
if self.hunger <= self.hunger_threshold and self.boredom <= self.boredom_threshold:
return "happy"
elif self.hunger > self.hunger_threshold:
return "hungry"
else:
return "bored"
def __str__(self):
state = " I'm " + self.name + ". "
state += " I feel " + self.mood() + ". "
# state += "Hunger %d Boredom %d Words %s" % (self.hunger, self.boredom, self.sounds)
return state
def hi(self):
print(self.sounds[randrange(len(self.sounds))])
self.update_boredom()
def teach(self, word):
self.sounds.append(word)
self.update_boredom()
def feed(self):
self.update_hunger()
def update_hunger(self):
self.hunger = max(0, self.hunger - self.hunger_decrement)
def update_boredom(self):
self.boredom = max(0, self.boredom - self.boredom_decrement)
class Cat(Pet):
sounds = ['Meow']
def mood(self):
if self.hunger > self.hunger_threshold:
return "hungry"
if self.boredom <2:
return "grumpy; leave me alone"
elif self.boredom > self.boredom_threshold:
return "bored"
elif randrange(2) == 0:
return "randomly annoyed"
else:
return "happy"
class Dog(Pet):
sounds = ['Woof', 'Ruff']
def mood(self):
if (self.hunger > self.hunger_threshold) and (self.boredom > self.boredom_threshold):
return "bored and hungry"
else:
return "happy"
def feed(self):
Pet.feed(self)
print("Arf! Thanks!")
class Bird(Pet):
sounds = ["chirp"]
def __init__(self, name="Kitty", chirp_number=2):
Pet.__init__(self, name) # call the parent class's constructor
# basically, call the SUPER -- the parent version -- of the constructor, with all the parameters that it needs.
self.chirp_number = chirp_number # now, also assign the new instance variable
def hi(self):
for i in range(self.chirp_number):
print(self.sounds[randrange(len(self.sounds))])
self.update_boredom()
class Lab(Dog):
def fetch(self):
return "I found the tennis ball!"
def hi(self):
print(self.fetch())
print(self.sounds[randrange(len(self.sounds))])
class Poodle(Dog):
def dance(self):
return "Dancin' in circles like poodles do."
def hi(self):
print(self.dance())
Dog.hi(self)
def whichone(petlist, name):
for pet in petlist:
if pet.name == name:
return pet
return None # no pet matched
pet_types = {'dog': Dog, 'lab': Lab, 'poodle': Poodle, 'cat': Cat, 'bird': Bird}
def whichtype(adopt_type="general pet"):
return pet_types.get(adopt_type.lower(), Pet)
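# e.g. whichtype('Poodle') -> Poodle, whichtype('hamster') -> Pet (the .get() fallback)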
def play():
animals = []
option = ""
base_prompt = """
Quit
Adopt <petname_with_no_spaces> <pet_type - choose dog, cat, lab, poodle, bird, or another unknown pet type>
Greet <petname>
Teach <petname> <word>
Feed <petname>
Choice: """
feedback = ""
while True:
action = input(feedback + "\n" + base_prompt)
feedback = ""
words = action.split()
if len(words) > 0:
command = words[0]
else:
command = None
if command == "Quit":
print("Exiting...")
return
elif command == "Adopt" and len(words) > 1:
if whichone(animals, words[1]):
feedback += "You already have a pet with that name\n"
else:
# figure out which class it should be
if len(words) > 2:
Cl = whichtype(words[2])
else:
Cl = Pet
# Make an instance of that class and append it
animals.append(Cl(words[1]))
elif command == "Greet" and len(words) > 1:
pet = whichone(animals, words[1])
if not pet:
feedback += "I didn't recognize that pet name. Please try again.\n"
print()
else:
pet.hi()
elif command == "Teach" and len(words) > 2:
pet = whichone(animals, words[1])
if not pet:
feedback += "I didn't recognize that pet name. Please try again."
else:
pet.teach(words[2])
elif command == "Feed" and len(words) > 1:
pet = whichone(animals, words[1])
if not pet:
feedback += "I didn't recognize that pet name. Please try again."
else:
pet.feed()
else:
feedback+= "I didn't understand that. Please try again."
for pet in animals:
pet.clock_tick()
feedback += "\n" + pet.__str__()
play()
###Output
_____no_output_____
###Markdown
Week 2 Assessment 1
###Code
class Pokemon(object):
attack = 12
defense = 10
health = 15
p_type = "Normal"
def __init__(self, name, level = 5):
self.name = name
self.level = level
def train(self):
self.update()
self.attack_up()
self.defense_up()
self.health_up()
self.level = self.level + 1
if self.level%self.evolve == 0:
return self.level, "Evolved!"
else:
return self.level
def attack_up(self):
self.attack = self.attack + self.attack_boost
return self.attack
def defense_up(self):
self.defense = self.defense + self.defense_boost
return self.defense
def health_up(self):
self.health = self.health + self.health_boost
return self.health
def update(self):
self.health_boost = 5
self.attack_boost = 3
self.defense_boost = 2
self.evolve = 10
def __str__(self):
self.update()
return "Pokemon name: {}, Type: {}, Level: {}".format(self.name, self.p_type, self.level)
class Grass_Pokemon(Pokemon):
attack = 15
defense = 14
health = 12
def update(self):
self.health_boost = 6
self.attack_boost = 2
self.defense_boost = 3
self.evolve = 12
def moves(self):
self.p_moves = ["razor leaf", "synthesis", "petal dance"]
def action(self):
return '{} knows a lot of different moves!'.format(self.name)
p1 = Grass_Pokemon('Belle')
print(p1.action())
class Pokemon(object):
attack = 12
defense = 10
health = 15
p_type = "Normal"
def __init__(self, name, level = 5):
self.name = name
self.level = level
def train(self):
self.update()
self.attack_up()
self.defense_up()
self.health_up()
self.level = self.level + 1
if self.level%self.evolve == 0:
return self.level, "Evolved!"
else:
return self.level
def attack_up(self):
self.attack = self.attack + self.attack_boost
return self.attack
def defense_up(self):
self.defense = self.defense + self.defense_boost
return self.defense
def health_up(self):
self.health = self.health + self.health_boost
return self.health
def update(self):
self.health_boost = 5
self.attack_boost = 3
self.defense_boost = 2
self.evolve = 10
def __str__(self):
return "Pokemon name: {}, Type: {}, Level: {}".format(self.name, self.p_type, self.level)
class Grass_Pokemon(Pokemon):
attack = 15
defense = 14
health = 12
p_type = "Grass"
def update(self):
self.health_boost = 6
self.attack_boost = 2
self.defense_boost = 3
self.evolve = 12
def moves(self):
self.p_moves = ["razor leaf", "synthesis", "petal dance"]
def attack_up(self):
if self.level >= 10 :
self.attack = self.attack + self.attack_boost
return self.attack
p2 = Grass_Pokemon('Bulby')
p3 = Grass_Pokemon('Pika')
for pokemon in range(10) :
p3.train()
print('Current attack strength:', p3.attack)
print(p3)
class Pokemon():
attack = 12
defense = 10
health = 15
p_type = "Normal"
def __init__(self, name,level = 5):
self.name = name
self.level = level
self.weak = "Normal"
self.strong = "Normal"
def train(self):
self.update()
self.attack_up()
self.defense_up()
self.health_up()
self.level = self.level + 1
if self.level%self.evolve == 0:
return self.level, "Evolved!"
else:
return self.level
def attack_up(self):
self.attack = self.attack + self.attack_boost
return self.attack
def defense_up(self):
self.defense = self.defense + self.defense_boost
return self.defense
def health_up(self):
self.health = self.health + self.health_boost
return self.health
def update(self):
self.health_boost = 5
self.attack_boost = 3
self.defense_boost = 2
self.evolve = 10
def __str__(self):
self.update()
return "Pokemon name: {}, Type: {}, Level: {}".format(self.name, self.p_type, self.level)
def opponent(self) :
if self.p_type == 'Grass' :
return ('Fire', 'Water')
elif self.p_type == 'Ghost' :
return ('Dark', 'Psychic')
elif self.p_type == 'Fire' :
return ('Water', 'Grass')
elif self.p_type == 'Flying' :
return ('Electric', 'Fighting')
class Grass_Pokemon(Pokemon):
attack = 15
defense = 14
health = 12
p_type = "Grass"
def update(self):
self.health_boost = 6
self.attack_boost = 2
self.defense_boost = 3
self.evolve = 12
class Ghost_Pokemon(Pokemon):
p_type = "Ghost"
def update(self):
self.health_boost = 3
self.attack_boost = 4
self.defense_boost = 3
class Fire_Pokemon(Pokemon):
p_type = "Fire"
class Flying_Pokemon(Pokemon):
p_type = "Flying"
grass_p = Grass_Pokemon('Bulbasaur')
ghost_p = Ghost_Pokemon('Casper')
fire_p = Fire_Pokemon('Charamander')
fly_p = Flying_Pokemon('Pigeot')
print(grass_p.opponent())
print(ghost_p.opponent())
print(fire_p.opponent())
print(fly_p.opponent())
###Output
('Fire', 'Water')
('Dark', 'Psychic')
('Water', 'Grass')
('Electric', 'Fighting')
###Markdown
Course 4 Final Course Project
###Code
print('''This project will take you through the process of implementing a simplified version of the game Wheel of Fortune. Here are the rules of our game:
There are num_human human players and num_computer computer players.
Every player has some amount of money ($0 at the start of the game)
Every player has a set of prizes (none at the start of the game)
The goal is to guess a phrase within a category. For example:
Category: Artist & Song
Phrase: Whitney Houston’s I Will Always Love You
Players see the category and an obscured version of the phrase where every alphanumeric character in the phrase starts out as hidden (using underscores: _):
Category: Artist & Song
Phrase: _______ _______'_ _ ____ ______ ____ ___
Note that case (capitalization) does not matter
During their turn, every player spins the wheel to determine a prize amount and:
If the wheel lands on a cash square, players may do one of three actions:
Guess any letter that hasn’t been guessed by typing a letter (a-z)
Vowels (a, e, i, o, u) cost $250 to guess and can’t be guessed if the player doesn’t have enough money. All other letters are “free” to guess
The player can guess any letter that hasn’t been guessed and gets that cash amount for every time that letter appears in the phrase
If there is a prize, the user also gets that prize (in addition to any prizes they already had)
If the letter does appear in the phrase, the user keeps their turn. Otherwise, it’s the next player’s turn
Example: The user lands on $500 and guesses ‘W’
There are three W’s in the phrase, so the player wins $1500
Guess the complete phrase by typing a phrase (anything over one character that isn’t ‘pass’)
If they are correct, they win the game
If they are incorrect, it is the next player’s turn
Pass their turn by entering 'pass'
If the wheel lands on “lose a turn”, the player loses their turn and the game moves on to the next player
If the wheel lands on “bankrupt”, the player loses their turn and loses their money but they keep all of the prizes they have won so far.
The game continues until the entire phrase is revealed (or one player guesses the complete phrase)
—
First, let’s learn about a few functions and methods that we’ll use along the way to do this project. There are no questions to answer in the next four active code windows. They are just here to introduce you to some functions and methods that you may not be aware of. The active code window that starts with “Part A” is where you are first asked to complete code.
—
The time.sleep(s) function (from the time module) delays execution of the next line of code for s seconds. You’ll find that we can build a little suspense during gameplay with some well-placed delays. The game can also be easier for users to understand if not everything happens instantly.''')
import time
for x in range(2, 6):
print('Sleep {} seconds..'.format(x))
time.sleep(x) # "Sleep" for x seconds
print('Done!')
import random
rand_number = random.randint(1, 10)
print('Random number between 1 and 10: {}'.format(rand_number))
letters = [letter for letter in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ']
rand_letter = random.choice(letters)
print('Random letter: {}'.format(rand_letter))
myString = 'Hello, World! 123'
print(myString.upper()) # HELLO, WORLD! 123
print(myString.lower()) # hello, world! 123
print(myString.count('l')) # 3
s = 'python is pythonic'
print(s.count('python')) # 2
import json
import random
import time
LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
# Repeatedly asks the user for a number between min & max (inclusive)
def getNumberBetween(prompt, min, max):
userinp = input(prompt) # ask the first time
while True:
try:
n = int(userinp) # try casting to an integer
if n < min:
errmessage = 'Must be at least {}'.format(min)
elif n > max:
errmessage = 'Must be at most {}'.format(max)
else:
return n
except ValueError: # The user didn't enter a number
errmessage = '{} is not a number.'.format(userinp)
# If we haven't gotten a number yet, add the error message
# and ask again
userinp = input('{}\n{}'.format(errmessage, prompt))
# Spins the wheel of fortune wheel to give a random prize
# Examples:
# { "type": "cash", "text": "$950", "value": 950, "prize": "A trip to Ann Arbor!" },
# { "type": "bankrupt", "text": "Bankrupt", "prize": false },
# { "type": "loseturn", "text": "Lose a turn", "prize": false }
def spinWheel():
with open("wheel.json", 'r') as f:
wheel = json.loads(f.read())
return random.choice(wheel)
# Returns a category & phrase (as a tuple) to guess
# Example:
# ("Artist & Song", "Whitney Houston's I Will Always Love You")
def getRandomCategoryAndPhrase():
with open("phrases.json", 'r') as f:
phrases = json.loads(f.read())
category = random.choice(list(phrases.keys()))
phrase = random.choice(phrases[category])
return (category, phrase.upper())
# Given a phrase and a list of guessed letters, returns an obscured version
# Example:
# guessed: ['L', 'B', 'E', 'R', 'N', 'P', 'K', 'X', 'Z']
# phrase: "GLACIER NATIONAL PARK"
# returns> "_L___ER N____N_L P_RK"
def obscurePhrase(phrase, guessed):
rv = ''
for s in phrase:
if (s in LETTERS) and (s not in guessed):
rv = rv+'_'
else:
rv = rv+s
return rv
# Returns a string representing the current state of the game
def showBoard(category, obscuredPhrase, guessed):
return """
Category: {}
Phrase: {}
Guessed: {}""".format(category, obscuredPhrase, ', '.join(sorted(guessed)))
category, phrase = getRandomCategoryAndPhrase()
guessed = []
for x in range(random.randint(10, 20)):
randomLetter = random.choice(LETTERS)
if randomLetter not in guessed:
guessed.append(randomLetter)
print("getRandomCategoryAndPhrase()\n -> ('{}', '{}')".format(category, phrase))
print("\n{}\n".format("-"*5))
print("obscurePhrase('{}', [{}])\n -> {}".format(phrase, ', '.join(["'{}'".format(c) for c in guessed]), obscurePhrase(phrase, guessed)))
print("\n{}\n".format("-"*5))
obscured_phrase = obscurePhrase(phrase, guessed)
print("showBoard('{}', '{}', [{}])\n -> {}".format(phrase, obscured_phrase, ','.join(["'{}'".format(c) for c in guessed]), showBoard(phrase, obscured_phrase, guessed)))
print("\n{}\n".format("-"*5))
num_times_to_spin = random.randint(2, 5)
print('Spinning the wheel {} times (normally this would just be done once per turn)'.format(num_times_to_spin))
for x in range(num_times_to_spin):
print("\n{}\n".format("-"*2))
print("spinWheel()")
print(spinWheel())
print("\n{}\n".format("-"*5))
print("In 2 seconds, will run getNumberBetween('Testing getNumberBetween(). Enter a number between 1 and 10', 1, 10)")
time.sleep(2)
print(getNumberBetween('Testing getNumberBetween(). Enter a number between 1 and 10', 1, 10))
VOWEL_COST = 250
LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
VOWELS = 'AEIOU'
# Write the WOFPlayer class definition (part A) here
class WOFPlayer :
def __init__(self, name) :
self.name = name
self.prizeMoney = 0
self.prizes = []
def addMoney(self, amt) :
self.prizeMoney += amt
def goBankrupt(self) :
self.prizeMoney = 0
def addPrize(self, prize) :
self.prizes.append(prize)
def __str__(self) :
return '{} (${})'.format(self.name, self.prizeMoney)
# Write the WOFHumanPlayer class definition (part B) here
class WOFHumanPlayer(WOFPlayer) :
def getMove(self, category, obscured_phrase, guessed) :
prompt_txt = '{} has ${} \n Category: {} \n Phrase: {} \n Guessed: {} \n\n Guess a letter, phrase, or type "exit" or "pass":'
prompt_input = input(prompt_txt.format(self.name, self.prizeMoney, category, obscured_phrase, guessed))
return prompt_input
# Write the WOFComputerPlayer class definition (part C) here
class WOFComputerPlayer(WOFPlayer) :
SORTED_FREQUENCIES = 'ZQXJKVBPYGFWMUCLDRHSNIOATE'
def __init__(self, name, difficulty) :
WOFPlayer.__init__(self, name)
self.difficulty = difficulty
def smartCoinFlip(self) : # decide semi-randomly whether to make a “good” or “bad” move.
rand_coin_flip = random.randint(1, 10)
if rand_coin_flip <= self.difficulty :
return True
else :
return False
    def getPossibleLetters(self, guessed) : # returns a list of letters that can still be guessed
        possible_letters = []
        for letter in LETTERS :
            if letter not in guessed :
                # vowels cost $250, so only include them if the player can afford one;
                # every other letter is free to guess
                if letter in VOWELS :
                    if self.prizeMoney >= VOWEL_COST :
                        possible_letters.append(letter)
                else :
                    possible_letters.append(letter)
        return possible_letters
def getMove(self, category, obscured_phrase, guessed) :
move_lst = self.getPossibleLetters(guessed)
flip_result = self.smartCoinFlip()
if len(move_lst) == 0 :
return 'pass'
else:
if flip_result == True :
for let in self.SORTED_FREQUENCIES :
if let in move_lst :
return let
elif flip_result == False :
return random.choice(move_lst)
import sys
sys.setExecutionLimit(600000) # let this take up to 10 minutes
import json
import random
import time
LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
VOWELS = 'AEIOU'
VOWEL_COST = 250
# Repeatedly asks the user for a number between min & max (inclusive)
def getNumberBetween(prompt, min, max):
userinp = input(prompt) # ask the first time
while True:
try:
n = int(userinp) # try casting to an integer
if n < min:
errmessage = 'Must be at least {}'.format(min)
elif n > max:
errmessage = 'Must be at most {}'.format(max)
else:
return n
except ValueError: # The user didn't enter a number
errmessage = '{} is not a number.'.format(userinp)
# If we haven't gotten a number yet, add the error message
# and ask again
userinp = input('{}\n{}'.format(errmessage, prompt))
# Spins the wheel of fortune wheel to give a random prize
# Examples:
# { "type": "cash", "text": "$950", "value": 950, "prize": "A trip to Ann Arbor!" },
# { "type": "bankrupt", "text": "Bankrupt", "prize": false },
# { "type": "loseturn", "text": "Lose a turn", "prize": false }
def spinWheel():
with open("wheel.json", 'r') as f:
wheel = json.loads(f.read())
return random.choice(wheel)
# Returns a category & phrase (as a tuple) to guess
# Example:
# ("Artist & Song", "Whitney Houston's I Will Always Love You")
def getRandomCategoryAndPhrase():
with open("phrases.json", 'r') as f:
phrases = json.loads(f.read())
category = random.choice(list(phrases.keys()))
phrase = random.choice(phrases[category])
return (category, phrase.upper())
# Given a phrase and a list of guessed letters, returns an obscured version
# Example:
# guessed: ['L', 'B', 'E', 'R', 'N', 'P', 'K', 'X', 'Z']
# phrase: "GLACIER NATIONAL PARK"
# returns> "_L___ER N____N_L P_RK"
def obscurePhrase(phrase, guessed):
rv = ''
for s in phrase:
if (s in LETTERS) and (s not in guessed):
rv = rv+'_'
else:
rv = rv+s
return rv
# Returns a string representing the current state of the game
def showBoard(category, obscuredPhrase, guessed):
return """
Category: {}
Phrase: {}
Guessed: {}""".format(category, obscuredPhrase, ', '.join(sorted(guessed)))
# GAME LOGIC CODE
print('='*15)
print('WHEEL OF PYTHON')
print('='*15)
print('')
num_human = getNumberBetween('How many human players?', 0, 10)
# Create the human player instances
human_players = [WOFHumanPlayer(input('Enter the name for human player #{}'.format(i+1))) for i in range(num_human)]
num_computer = getNumberBetween('How many computer players?', 0, 10)
# If there are computer players, ask how difficult they should be
if num_computer >= 1:
difficulty = getNumberBetween('What difficulty for the computers? (1-10)', 1, 10)
# Create the computer player instances
computer_players = [WOFComputerPlayer('Computer {}'.format(i+1), difficulty) for i in range(num_computer)]
players = human_players + computer_players
# No players, no game :(
if len(players) == 0:
print('We need players to play!')
raise Exception('Not enough players')
# category and phrase are strings.
category, phrase = getRandomCategoryAndPhrase()
# guessed is a list of the letters that have been guessed
guessed = []
# playerIndex keeps track of the index (0 to len(players)-1) of the player whose turn it is
playerIndex = 0
# will be set to the player instance when/if someone wins
winner = False
def requestPlayerMove(player, category, guessed):
while True: # we're going to keep asking the player for a move until they give a valid one
time.sleep(0.1) # added so that any feedback is printed out before the next prompt
move = player.getMove(category, obscurePhrase(phrase, guessed), guessed)
move = move.upper() # convert whatever the player entered to UPPERCASE
if move == 'EXIT' or move == 'PASS':
return move
elif len(move) == 1: # they guessed a character
if move not in LETTERS: # the user entered an invalid letter (such as @, #, or $)
print('Guesses should be letters. Try again.')
continue
elif move in guessed: # this letter has already been guessed
print('{} has already been guessed. Try again.'.format(move))
continue
elif move in VOWELS and player.prizeMoney < VOWEL_COST: # if it's a vowel, we need to be sure the player has enough
print('Need ${} to guess a vowel. Try again.'.format(VOWEL_COST))
continue
else:
return move
else: # they guessed the phrase
return move
while True:
player = players[playerIndex]
wheelPrize = spinWheel()
print('')
print('-'*15)
print(showBoard(category, obscurePhrase(phrase, guessed), guessed))
print('')
print('{} spins...'.format(player.name))
time.sleep(2) # pause for dramatic effect!
print('{}!'.format(wheelPrize['text']))
time.sleep(1) # pause again for more dramatic effect!
if wheelPrize['type'] == 'bankrupt':
player.goBankrupt()
elif wheelPrize['type'] == 'loseturn':
pass # do nothing; just move on to the next player
elif wheelPrize['type'] == 'cash':
move = requestPlayerMove(player, category, guessed)
if move == 'EXIT': # leave the game
print('Until next time!')
break
elif move == 'PASS': # will just move on to next player
print('{} passes'.format(player.name))
elif len(move) == 1: # they guessed a letter
guessed.append(move)
print('{} guesses "{}"'.format(player.name, move))
if move in VOWELS:
player.prizeMoney -= VOWEL_COST
count = phrase.count(move) # returns an integer with how many times this letter appears
if count > 0:
if count == 1:
print("There is one {}".format(move))
else:
print("There are {} {}'s".format(count, move))
# Give them the money and the prizes
player.addMoney(count * wheelPrize['value'])
if wheelPrize['prize']:
player.addPrize(wheelPrize['prize'])
# all of the letters have been guessed
if obscurePhrase(phrase, guessed) == phrase:
winner = player
break
continue # this player gets to go again
elif count == 0:
print("There is no {}".format(move))
else: # they guessed the whole phrase
if move == phrase: # they guessed the full phrase correctly
winner = player
# Give them the money and the prizes
player.addMoney(wheelPrize['value'])
if wheelPrize['prize']:
player.addPrize(wheelPrize['prize'])
break
else:
print('{} was not the phrase'.format(move))
# Move on to the next player (or go back to player[0] if we reached the end)
playerIndex = (playerIndex + 1) % len(players)
if winner:
# In your head, you should hear this as being announced by a game show host
print('{} wins! The phrase was {}'.format(winner.name, phrase))
print('{} won ${}'.format(winner.name, winner.prizeMoney))
if len(winner.prizes) > 0:
print('{} also won:'.format(winner.name))
for prize in winner.prizes:
print(' - {}'.format(prize))
else:
print('Nobody won. The phrase was {}'.format(phrase))
###Output
_____no_output_____ |
module1-regression-1/LS_DS_211_assignment.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv',
parse_dates=['created'],
index_col='created')
# assert df.shape == (49352, 34)
# dtype_dict = {'ZIP_CODE': 'object',
# 'YEAR_BUILT': int}
# df = pd.read_csv(DATA_PATH+'condos/tribeca.csv',
# dtype=dtype_dict,
# parse_dates=['SALE_DATE'],
# index_col='SALE_DATE')
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.head()
df.info()
# plotting histogram of the target variable
df['price'].plot(kind='hist')
# the data is positively skewed
df['elevator'].value_counts()
df['bedrooms'].value_counts()
import matplotlib.pyplot as plt
# style
plt.style.use('seaborn')
plt.scatter(df['bedrooms'], df['price']);
plt.xlabel('Number of Bedrooms')
plt.ylabel('Price')
plt.show()
# style
plt.style.use('seaborn')
plt.scatter(df['latitude'], df['price']);
plt.xlabel('Latitude')
plt.ylabel('Price')
plt.show()
plt.style.use('seaborn')
plt.scatter(df['longitude'], df['price']);
plt.xlabel('longitude')
plt.ylabel('Price')
plt.show()
# working with total bedrooms and price
X = df[['bedrooms']]
y = df['price']
# # convert the created from an object to datetime so we can split our dataset using created
# df["created"] = pd.to_datetime(df["created"])
# df["date_created"] = df["created"].dt.date
# df["date_created"]
# # make the new column date_created as the index
# df = df.set_index('created')
# df.head()
# created dates range from April to June; we'll hold out June as the test set
# now we split the dataset into train and test
cutoff = '2016-06-01'
# applying the filter
filt = X.index < cutoff
X_train, y_train = X.loc[filt], y.loc[filt]
X_test, y_test = X.loc[~filt], y.loc[~filt]
# baseline guess
plt.hist(y_train);
baseline_guess = y_train.mean()
moe = abs(baseline_guess - y_train).mean()
print(f'prediction of a baseline model: ${round(baseline_guess,2)}, with a margin of error: ${round(moe,2)}')
# need to make a model that is more accurate than the baseline model above
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import numpy as np
lin_reg = LinearRegression()
lin_reg.fit(X,y);
lin_reg.coef_[0]
lin_reg.intercept_
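# A minimal sketch of a prediction function (assuming the fitted lin_reg above):
# the coefficient is the model's estimated rent increase per additional bedroom,
# and the intercept is its estimated rent when bedrooms = 0.
def predict_rent(bedrooms):
    return lin_reg.intercept_ + lin_reg.coef_[0] * bedrooms
predict_rent(2)  # e.g. estimated monthly rent for a 2-bedroom listing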
###Output
_____no_output_____
###Markdown
We can write the fitted line's equation for rent as follows: rent = 853.25 × (number of bedrooms) + 2267.97
###Code
X_model = np.linspace(0, X_train['bedrooms'].max(), 50).reshape(-1,1)
# Note how we use the .predict() method with our model
rent_pred = lin_reg.predict(X_model)
# Plot our data
plt.scatter(X_train, y_train)
# Plot the regression line
plt.plot(X_model, rent_pred , color='red', label='Our Model')
plt.xlabel('Bedroom')
plt.ylabel('Rent Price')
plt.title('linear Regression')
plt.legend()
# Calculating RMSE score
rent_predictions = lin_reg.predict(X)
rmse_scores = np.sqrt(mean_squared_error(rent_predictions, y))
rmse_scores
# validating the score using cross_val
from sklearn.model_selection import cross_val_score
score = cross_val_score(lin_reg, X, y, scoring= 'neg_mean_squared_error', cv=10)
rmse = np.sqrt(-score)
std_rmse = np.std(rmse)
print(rmse)
print(rmse.mean())
std_rmse
# The cross-validation scores are almost the same as the training score: low bias and low variance, so the model generalizes well
###Output
_____no_output_____
###Markdown
We should be able to improve the score with polynomial regression, since the relationship is not exactly linear.
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
poly_reg = make_pipeline(PolynomialFeatures(degree=9), LinearRegression())
poly_reg.fit(X,y);
X_model = np.linspace(0, X_train['bedrooms'].max(), 50).reshape(-1,1)
# Note how we use the .predict() method with our model
rent_pred = poly_reg.predict(X_model)
# Plot our data
plt.scatter(X_train, y_train)
# Plot the regression line
plt.plot(X_model, rent_pred , color='red', label='Our Model')
plt.xlabel('Bedroom')
plt.ylabel('Rent Price')
plt.title('using Polynomial Regression')
plt.legend()
plt.show()
# Calculating RMSE score
rent_pred1 = poly_reg.predict(X)
rmse_scores = np.sqrt(mean_squared_error(rent_pred1, y))
rmse_scores
# validating the score using cross_val
score = cross_val_score(poly_reg, X, y, scoring= 'neg_mean_squared_error', cv=10)
rmse = np.sqrt(-score)
std_rmse = np.std(rmse)
print(rmse)
print(rmse.mean())
std_rmse
# degree 9 gave the best score among the polynomial models we tried
###Output
_____no_output_____
###Markdown
Let's try Ridge regression and see what we get. Don't expect better results, because the model was not overfitting.
###Code
from sklearn.linear_model import Ridge
# use Random Search to find the best value of alpha for ridge regression
from sklearn.model_selection import RandomizedSearchCV
ridge = Ridge(normalize=True, random_state=42)
parameters = {'alpha':[1e-15, 1e-10, 1e-8, 1e-5, 1e-2, 1, 5,10, 20,30,40, 50, 60, 100, 110]}
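# alpha sets the L2 regularization strength; RandomizedSearchCV samples candidate values from this list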
ridge_reg = RandomizedSearchCV(ridge, parameters, scoring= 'neg_mean_squared_error', cv=10, random_state=42)
ridge_reg.fit(X,y)
print(ridge_reg.best_params_)
print(ridge_reg.best_estimator_)
# using the best value of alpha for ridge lets calculate the rmse and see if we get any better results
ridge = ridge_reg.best_estimator_
# Calculating RMSE score
rent_pred2 = ridge.predict(X)
rmse_scores = np.sqrt(mean_squared_error(rent_pred2, y))
rmse_scores
X_model = np.linspace(0, X_train['bedrooms'].max(), 50).reshape(-1,1)
# Note how we use the .predict() method with our model
rent_pred = ridge.predict(X_model)
# Plot our data
plt.scatter(X_train, y_train)
# Plot the regression line
plt.plot(X_model, rent_pred , color='red', label='Our Model')
plt.xlabel('Bedroom')
plt.ylabel('Rent Price')
plt.title('Using Ridge Regression')
plt.legend()
plt.show()
# validating the score using cross_val
score = cross_val_score(ridge, X, y, scoring= 'neg_mean_squared_error', cv=10)
rmse = np.sqrt(-score)
std_rmse = np.std(rmse)
print(rmse)
print(rmse.mean())
std_rmse
###Output
[1513.61744289 1442.79760998 1499.81967906 1467.77577537 1472.22972372
1532.00247255 1516.93963049 1505.28687873 1454.30489411 1477.66023195]
1488.243433883667
###Markdown
Let's try using Stochastic Gradient Descent with the Ridge (L2) penalty
###Code
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(random_state=42)
# lets start by finding out the best value of the learing rate
parameters1 = {'eta0': [0.001, 0.005, 0.01, 0.03, 0.06, 0.09, 1, 1.05]}
sgd_regressor = RandomizedSearchCV(sgd_reg, parameters1, cv=10, scoring = 'neg_mean_squared_error', random_state=42)
sgd_regressor.fit(X,y)
print(sgd_regressor.best_params_)
print(sgd_regressor.best_estimator_)
sgd_reg = sgd_regressor.best_estimator_
rent_pred4 = sgd_reg.predict(X)
rmse = np.sqrt(mean_squared_error(rent_pred4, y))
rmse
X_model = np.linspace(0, X_train['bedrooms'].max(), 50).reshape(-1,1)
# Note how we use the .predict() method with our model
rent_pred5 = sgd_reg.predict(X_model)
# Plot our data
plt.scatter(X_train, y_train)
# Plot the regression line
plt.plot(X_model, rent_pred5 , color='red', label='Our Model')
plt.xlabel('Bedroom')
plt.ylabel('Rent Price')
plt.title('Using SGD regression with Ridge')
plt.legend()
plt.show()
# validating the score using cross_val
# shouldn't be any different than ridge regression
score = cross_val_score(sgd_reg, X, y, scoring= 'neg_mean_squared_error', cv=10)
rmse = np.sqrt(-score)
std_rmse = np.std(rmse)
print(rmse)
print(rmse.mean())
std_rmse
###Output
[1513.57503353 1443.298727 1499.67209299 1466.98882474 1472.31476895
1532.83228657 1516.84217954 1504.98170739 1455.4029412 1477.19573594]
1488.310429785467
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*---
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Module Project: Regression I
During the guided project, we predicted how much it would cost to buy a condo in Tribeca. For the module project, your goal will be similar: predict how much it costs to rent an apartment in New York City.
Dataset source: [renthop.com](https://www.renthop.com/).
Directions
> Do Not Copy-Paste. You must *type* each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.
>
> — Zed Shaw, [Learn Python the Hard Way](https://learnpythonthehardway.org/)
The tasks for this project are as follows:
- **Task 1:** Import `csv` file using wrangle function.
- **Task 2:** Conduct exploratory data analysis (EDA) and plot the relationship between one feature and the target `'price'`.
- **Task 3:** Split data into feature matrix `X` and target vector `y`.
- **Task 4:** Establish the baseline mean absolute error for your dataset.
- **Task 5:** Build and train a `LinearRegression` model.
- **Task 6:** Check the mean absolute error of our model on the training data.
- **Task 7:** Extract and print the intercept and coefficient from your `LinearRegression` model.
**Note**
You should limit yourself to the following libraries for this project:
- `matplotlib`
- `numpy`
- `pandas`
- `sklearn` I. Wrangle Data
###Code
from sklearn.metrics import mean_absolute_error as mae
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
def wrangle(filepath):
df = pd.read_csv(filepath)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
return df
filepath = DATA_PATH + 'apartments/renthop-nyc.csv'
###Output
_____no_output_____
###Markdown
**Task 1:** Use the above `wrangle` function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
###Code
df = wrangle(filepath)
###Output
_____no_output_____
###Markdown
**Task 2:** Use your `pandas` and dataviz skills to explore the dataset. As part of this process, make a scatter plot that shows the relationship between one of the numerical features in the dataset and the target `'price'`.**Remember:** You should plot your feature on the `X` axis and your target on the `y` axis.
###Code
#access the dataframe and apply a mask that is == 'low' and set it equal to one
#access the dataframe and apply a mask that is == 'medium' and set it equal to two
#access the dataframe and apply a mask that is == 'high' and set it equal to three
#create a function with a single parameter
#if the parameter is equal to low return 1
#if the parameter is equal to medium return 2
#if the parameter is equal to high return 3
#else return 0 because its a value that we were not expecting
# Three alternative ways to encode interest_level as a number -- run only ONE of them,
# since each expects the column to still contain the original strings.

#method using loc
df.loc[df['interest_level'] == 'low', 'interest_level'] = 1
df.loc[df['interest_level'] == 'medium', 'interest_level'] = 2
df.loc[df['interest_level'] == 'high', 'interest_level'] = 3

#method using boolean masks (commented out; equivalent to the .loc version above)
# low_mask = df['interest_level'] == 'low'
# df.loc[low_mask, 'interest_level'] = 1
# ...and likewise for 'medium' and 'high'

#method using functions (commented out)
#apply the function to the dataframe column you're interested in modifying
def interest_lvl_to_num(string):
    if string == 'low':
        return 1
    elif string == 'medium':
        return 2
    elif string == 'high':
        return 3
    #return 0 in the event that string is an unexpected value
    return 0

# df['interest_level'] = df['interest_level'].apply(interest_lvl_to_num)
df.head()
# prints numeric (int or float) columns with more than 2 unique values
for col in df.columns:
    if (df[col].dtype in ('int64', 'float64')) and (df[col].nunique() > 2):
        print(df[col].value_counts())
plt.scatter(df['bathrooms'], df['price'])
###Output
_____no_output_____
###Markdown
II. Split Data**Task 3:** Choose one feature from the dataset and assign it to your feature matrix `X`. Then assign the column `'price'` to the target vector `y`.**Remember:** Your feature matrix needs to be two-dimensional, but your target vector must be one-dimensional.
###Code
X = df[['bathrooms']]
y = df['price']
###Output
_____no_output_____
###Markdown
III. Establish Baseline**Task 4:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y`. Next, create a list `y_pred` that has the same length as `y` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
###Code
y_pred = [y.mean()] * len(df)
baseline_mae = mae(y, y_pred)
print('Baseline MAE:', baseline_mae)
###Output
Baseline MAE: 1201.532252154329
###Markdown
IV. Build Model**Task 5:** Build and train a `LinearRegression` model named `model` using your feature matrix `X` and your target vector `y`.
###Code
# Step 1: Import predictor class
from sklearn.linear_model import LinearRegression as lr
# Step 2: Instantiate predictor
model = lr()
# Step 3: Fit predictor on the (training) data
model.fit(X,y)
###Output
_____no_output_____
###Markdown
V. Check Metrics**Task 6:** How does your model perform in comparison to your baseline? Calculate the mean absolute error for your model's predictions.
###Code
training_mae = mae(y, model.predict(X))
print('Training MAE:', training_mae)
###Output
Training MAE: 889.763187741364
###Markdown
VI. Communicate Results You've just created a linear model. That means that your model makes predictions using an equation that looks like $\texttt{apt price} = \texttt{intercept}~+~\texttt{coefficient}~\times~\texttt{your feature}$. But what are the values of the intercept and coefficient that your model is using? **Task 7:** Print out the intercept and coefficient associated with `model`.
###Code
print(model.intercept_)
print(model.coef_)
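# A quick sketch checking the equation by hand (2 bathrooms is just an example value):
# intercept + coefficient * feature should match model.predict for the same input.
print(model.intercept_ + model.coef_[0] * 2)
print(model.predict(pd.DataFrame({'bathrooms': [2]}))[0])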
###Output
486.9330801932765
[2573.36198309]
###Markdown
BloomTech Data Science*Unit 2, Sprint 1, Module 1*---
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Module Project: Regression IDuring the guided project, we predicted how much it would cost to buy a condo in Tribeca. For the module project, your goal will be similar: predict how much it costs to rent an apartment in New York City.Dataset source: [renthop.com](https://www.renthop.com/). Directions> Do Not Copy-Paste. You must *type* each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.>> — Zed Shaw, [Learn Python the Hard Way](https://learnpythonthehardway.org/)The tasks for this project are as follows:- **Task 1:** Import `csv` file using wrangle function.- **Task 2:** Conduct exploratory data analysis (EDA) and plot the relationship between one feature and the target `'price'`.- **Task 3:** Split data into feature matrix `X` and target vector `y`.- **Task 4:** Establish the baseline mean absolute error for your dataset.- **Task 5:** Build and train a `LinearRegression` model.- **Task 6:** Check the mean absolute error of our model on the training data.- **Task 7:** Extract and print the intercept and coefficient from your `LinearRegression` model.**Note**You should limit yourself to the following libraries for this project:- `matplotlib`- `numpy`- `pandas`- `sklearn` I. Wrangle Data
###Code
import numpy as np
import pandas as pd

def wrangle(filepath):
df = pd.read_csv(filepath)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
return df
filepath = DATA_PATH + 'apartments/renthop-nyc.csv'
###Output
_____no_output_____
###Markdown
**Task 1:** Use the above `wrangle` function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
###Code
df = ...
###Output
_____no_output_____
###Markdown
**Task 2:** Use your `pandas` and dataviz skills to explore the dataset. As part of this process, make a scatter plot that shows the relationship between one of the numerical features in the dataset and the target `'price'`.**Remember:** You should plot your feature on the `X` axis and your target on the `y` axis. II. Split Data**Task 3:** Choose one feature from the dataset and assign it to your feature matrix `X`. Then assign the column `'price'` to the target vector `y`.**Remember:** Your feature matrix needs to be two-dimensional, but your target vector must be one-dimensional.
###Code
X = ...
y = ...
###Output
_____no_output_____
###Markdown
III. Establish Baseline**Task 4:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y`. Next, create a list `y_pred` that has the same length as `y` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
###Code
baseline_mae = ...
print('Baseline MAE:', baseline_mae)
###Output
_____no_output_____
###Markdown
IV. Build Model**Task 5:** Build and train a `LinearRegression` model named `model` using your feature matrix `X` and your target vector `y`.
###Code
# Step 1: Import predictor class
# Step 2: Instantiate predictor
model = ...
# Step 3: Fit predictor on the (training) data
###Output
_____no_output_____
###Markdown
V. Check Metrics**Task 6:** How does your model perform in comparison to your baseline? Calculate the mean absolute error for your model's predictions.
###Code
training_mae = ...
print('Training MAE:', training_mae)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv', parse_dates=["created"])
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
# Explore df
df.head(8)
df.describe()
df.info()
# year created?
df.created.dt.year.value_counts()
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error
from sklearn.linear_model import LinearRegression
plt.scatter(df.bedrooms, df.price)
plt.xlabel("No. of bedrooms")
plt.ylabel("Price")
plt.show()
###Output
_____no_output_____
###Markdown
I'm going to use bedrooms as my feature to explore the relationship between price and number of bedrooms.
###Code
# rough price-per-bedroom ratio, estimated from the first five rows
(df.price.head().mean()) / (df.bedrooms.head().mean())
# Baseline function: $1631 per bedroom, based on the ratio above
def base_price_return(rooms):
    rent_price = rooms * 1631
    return rent_price
df["est_price"] = df["bedrooms"].apply(base_price_return)
df["error"] = abs(df.price - df["est_price"])
df[["price", "bedrooms", "est_price", "error"]].head()
# Look at plot
plt.scatter(df.bedrooms, df.price)
plt.plot(df["bedrooms"], df["est_price"], color = "r", label = "my basic model")
plt.legend()
plt.xlabel("No. of bedrooms")
plt.ylabel("Rent Price ($)")
plt.show()
# Data Split
target = "price"
y = df[target]
X = df[["bedrooms"]]
print(y.shape)
print(X.shape)
y.mean()
print(f" Baseline mean absolute error: {mean_absolute_error(y, [y.mean()]*len(y))}", )
# Perform Linear Regression
model = LinearRegression()
model.fit(X, y)
y_pred = model.predict(X)
# Model mean absolute error
print(f"Model mean absolute error is {mean_absolute_error(y, y_pred)}")
# View regression equation
print(f"Can estimate rent price with the formula {model.intercept_.round(2)} + {model.coef_[0].round(2)} * no. of rooms")
# View plot
plt.scatter(X, y, alpha=0.25)
plt.plot(X, y_pred, color = "r", label = "sklearn model")
plt.legend()
plt.xlabel("No. of Bedrooms")
plt.ylabel("Rent Price ($)")
plt.show()
###Output
_____no_output_____
###Markdown
Let's add bathrooms as a second feature to the linear regression
###Code
X = df[["bedrooms", "bathrooms"]]
mult_linear_regression = LinearRegression()
mult_linear_regression.fit(X, y)
y_predict = mult_linear_regression.predict(X)
# Mean absolute error
print(f"This model's mean absolute error is {mean_absolute_error(y, y_predict)}")
###Output
This model's mean absolute error is 821.9657608064343
###Markdown
The MAE decreased with the extra feature, so bathrooms appears to add useful information rather than being redundant
###Code
# Linear regression equation
print(f"Can estimate the rent price with the linear regression equation: Rent price = {mult_linear_regression.intercept_.round(2)} + {mult_linear_regression.coef_[0].round(2)} * no. of bedrooms + {mult_linear_regression.coef_[1].round(2)} * no. of bathrooms")
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*---
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Module Project: Regression IDuring the guided project, we predicted how much it would cost to buy a condo in Tribeca. For the module project, your goal will be similar: predict how much it costs to rent an apartment in New York City.Dataset source: [renthop.com](https://www.renthop.com/). Directions> Do Not Copy-Paste. You must *type* each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.>> — Zed Shaw, [Learn Python the Hard Way](https://learnpythonthehardway.org/)The tasks for this project are as follows:- **Task 1:** Import the `csv` file using the wrangle function.- **Task 2:** Conduct exploratory data analysis (EDA) and plot the relationship between one feature and the target `'price'`.- **Task 3:** Split data into feature matrix `X` and target vector `y`.- **Task 4:** Establish the baseline mean absolute error for your dataset.- **Task 5:** Build and train a `LinearRegression` model.- **Task 6:** Check the mean absolute error of your model on the training data.- **Task 7:** Extract and print the intercept and coefficient from your `LinearRegression` model.**Note**You should limit yourself to the following libraries for this project:- `matplotlib`- `numpy`- `pandas`- `sklearn` I. Wrangle Data
###Code
# imports needed by the wrangle function below
import numpy as np
import pandas as pd
def wrangle(filepath):
    df = pd.read_csv(filepath)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
return df
filepath = DATA_PATH + 'apartments/renthop-nyc.csv'
###Output
_____no_output_____
###Markdown
**Task 1:** Use the above `wrangle` function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
###Code
import pandas as pd
df1 = pd.read_csv(DATA_PATH + 'apartments/renthop-nyc.csv')
print(df1.shape)
#df1.head()
import pandas as pd
import numpy as np
#df = wrangle(DATA_PATH + 'apartments/renthop-nyc.csv')
df = wrangle(filepath)
print(df.shape)
df.head()
###Output
(48817, 34)
###Markdown
**Task 2:** Use your `pandas` and dataviz skills to explore the dataset. As part of this process, make a scatter plot that shows the relationship between one of the numerical features in the dataset and the target `'price'`.**Remember:** You should plot your feature on the `X` axis and your target on the `y` axis.
###Code
df.shape
df.head()
df.isnull().sum()
#df['created'].head()
# the 'created' column is stored as the object dtype;
# let's convert it to a proper datetime
df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True)
df['created'].head()
import matplotlib.pyplot as plt
df['price'].hist()
plt.xlabel('price')
plt.ylabel('count')
plt.title('Distribution of house prices')
import seaborn as sns
sns.catplot(x='interest_level',data= df,kind='count')
plt.ylabel('Frequency')
plt.xlabel('Interest level')
plt.title('Distribution of interest levels')
plt.show()
df['interest_level'].value_counts()
#sns.lmplot(x = 'interest_level', y = 'price', data = df, ci = None )
###Output
_____no_output_____
###Markdown
II. Split Data**Task 3:** Choose one feature from the dataset and assign it to your feature matrix `X`. Then assign the column `'price'` to the target vector `y`.**Remember:** Your feature matrix needs to be two-dimensional, but your target vector must be one-dimensional.
###Code
X = df[['latitude']]
target = 'price'
y = df[target]
print(X.shape)
print(y.shape)
import numpy as np
df['latitude2'] = np.log(df['latitude'])
sns.lmplot(x = 'latitude', y = 'price', data = df, ci = None)
###Output
_____no_output_____
###Markdown
III. Establish Baseline**Task 4:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y`. Next, create a list `y_pred` that has the same length as `y` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
###Code
#calculate the y-mean
y_mean = y.mean()
print(y_mean)
#creating the baseline
y_pred = [y_mean] * len(y)
y_pred[:10]
from sklearn.metrics import mean_absolute_error
baseline_mae = mean_absolute_error(y, y_pred)
print('Baseline MAE:', baseline_mae)
###Output
Baseline MAE: 1549.6424487275
###Markdown
IV. Build Model**Task 5:** Build and train a `LinearRegression` model named `model` using your feature matrix `X` and your target vector `y`.
###Code
# Step 1: Import predictor class
from sklearn.linear_model import LinearRegression
# Step 2: Instantiate predictor
model = LinearRegression()
# Step 3: Fit predictor on the (training) data
model.fit(X, y)
model.predict([[40.7145]])
###Output
_____no_output_____
###Markdown
V. Check Metrics**Task 6:** How does your model perform in comparison to your baseline? Calculate the mean absolute error for your model's predictions.
###Code
training_mae = mean_absolute_error(y, model.predict(X))
print('Training MAE:', training_mae)
###Output
Training MAE: 1549.6523263064641
###Markdown
VI. Communicate Results You've just created a linear model. That means that your model makes predictions using an equation that looks like $\texttt{apt price} = \texttt{intercept}~+~\texttt{coefficient}~\times~\texttt{your feature}$. But what are the values of the intercept and coefficient that your model is using? **Task 7:** Print out the intercept and coefficient associated with `model`.
###Code
f'price = {model.intercept_} + {model.coef_[0]} * Latitude'
from statsmodels.formula.api import ols
apt_price = ols('price ~ latitude', data=df).fit()
apt_price.params
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv',
parse_dates=['created'],
index_col='created')
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
print(df.shape)
df.head()
df.info()
df.isnull().sum()
df['price'].describe()
import matplotlib.pyplot as plt
# Choosing bedrooms as my feature. Here is the plot of its relationship with
# the target (rent price).
plt.scatter(df['bedrooms'], df['price'])
plt.xlabel('number of bedrooms')
plt.ylabel('monthly price of rent')
plt.show()
# Splitting target from feature matrix
target = 'price'
y = df[target]
X = df[['bedrooms']]
# y is one-dimensional
y.shape
# X is two-dimensional
X.shape
# Establishing baseline
y.mean()
from sklearn.metrics import mean_absolute_error
print('BASELINE MAE:', mean_absolute_error(y, [y.mean()]*len(y)))
# Now, building the model:
# 1) Importing my predictor
from sklearn.linear_model import LinearRegression
# 2) Instantiating my predictor
model = LinearRegression()
# 3) Training my model with the data
model.fit(X,y)
# 4) Predicting using my model
y_pred = model.predict(X)
# Checking metrics
print('TRAINING MAE:', mean_absolute_error(y, y_pred))
# better than my baseline, woohoo!
# Interpreting my model
model.coef_
# The constant that the number of bedrooms is multiplied by in the model.
model.intercept_
# The intercept: the predicted monthly price when the number of bedrooms is zero.
plt.scatter(X, y)
plt.plot(X, y_pred, color='r', label='our model')
plt.legend()
plt.xlabel('number of bedrooms')
plt.ylabel('monthly price of rent')
plt.show()
# A function to predict new data:
def predict(bedrooms):
    y_pred = model.predict([[bedrooms]])
    estimate = y_pred[0]
    coefficient = model.coef_[0]
    print(int(estimate), 'is our estimated monthly rent price for an apartment with', int(bedrooms), 'bedrooms in New York City.')
    print('Each additional bedroom adds about', int(coefficient), 'to that estimate.')
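# hedged usage example for the function defined above
predict(3)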
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.head()
## Bare-bones linear regression with one feature
# 1 import
from sklearn.linear_model import LinearRegression
# 2 Instantiate
model = LinearRegression()
# 3 Arrange x, y
target = 'price'
df['bathrooms'], df['price']  # quick peek at a candidate feature and the target
# Choose every numeric feature so I don't have to test each one (:
# And for the stretch goal
features = df.columns[df.dtypes != 'object']
features = features.drop(target)
x_train = df[features]
y_train = df[target]
# 4 fit the model
model.fit(x_train, y_train)
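# hedged add-on: check the all-features model's training MAE
# (mean_absolute_error is a standard sklearn.metrics function)
from sklearn.metrics import mean_absolute_error
print('Training MAE:', mean_absolute_error(y_train, model.predict(x_train)))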
## This is for all the features
def predict(feature):
model = LinearRegression()
model.fit(df[feature], df[target])
y_pred = model.predict(df[feature])
estimate = y_pred[0]
coefficient = model.coef_[0]
return coefficient
for f in features:
coef = predict([f])
print(f'- {target} changes by {coef} as {f} increases.\n')
###Output
- price changes by 2573.3743950844187 as bathrooms increases.
- price changes by 853.2541675274308 as bedrooms increases.
- price changes by -1638.1419024698357 as latitude increases.
- price changes by -15315.115869638641 as longitude increases.
- price changes by 731.1727012447627 as elevator increases.
- price changes by 181.57690054355194 as cats_allowed increases.
- price changes by 358.1576594795972 as hardwood_floors increases.
- price changes by 214.12026672756465 as dogs_allowed increases.
- price changes by 984.8244108624798 as doorman increases.
- price changes by 800.8688538020601 as dishwasher increases.
- price changes by 483.5511389794647 as no_fee increases.
- price changes by -153.0410672680397 as laundry_in_building increases.
- price changes by 909.8606767742145 as fitness_center increases.
- price changes by -131.96961298579427 as pre-war increases.
- price changes by 1255.37386991892 as laundry_in_unit increases.
- price changes by 638.519358391544 as roof_deck increases.
- price changes by 725.5110423351198 as outdoor_space increases.
- price changes by 1409.483037847241 as dining_room increases.
- price changes by 563.9150825796559 as high_speed_internet increases.
- price changes by 1028.8235496781783 as balcony increases.
- price changes by 1038.057067282824 as swimming_pool increases.
- price changes by 567.5025783643717 as new_construction increases.
- price changes by 1225.6586053094607 as terrace increases.
- price changes by -114.7124671596686 as exclusive increases.
- price changes by 61.90853010014893 as loft increases.
- price changes by 940.0044770699176 as garden_patio increases.
- price changes by 785.3719374021288 as wheelchair_access increases.
- price changes by 127.00348063648099 as common_outdoor_space increases.
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*---
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Module Project: Regression IDuring the guided project, we predicted how much it would cost to buy a condo in Tribeca. For the module project, your goal will be similar: predict how much it costs to rent an apartment in New York City.Dataset source: [renthop.com](https://www.renthop.com/). Directions> Do Not Copy-Paste. You must *type* each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.>> — Zed Shaw, [Learn Python the Hard Way](https://learnpythonthehardway.org/)The tasks for this project are as follows:- **Task 1:** Import the `csv` file using the wrangle function.- **Task 2:** Conduct exploratory data analysis (EDA) and plot the relationship between one feature and the target `'price'`.- **Task 3:** Split data into feature matrix `X` and target vector `y`.- **Task 4:** Establish the baseline mean absolute error for your dataset.- **Task 5:** Build and train a `LinearRegression` model.- **Task 6:** Check the mean absolute error of your model on the training data.- **Task 7:** Extract and print the intercept and coefficient from your `LinearRegression` model.**Note**You should limit yourself to the following libraries for this project:- `matplotlib`- `numpy`- `pandas`- `sklearn` I. Wrangle Data
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
pd.options.display.float_format = '{:,.0f}'.format
def wrangle(filepath):
df = pd.read_csv(filepath,
parse_dates=['created'])
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
# Drop columns with too-high or too-low cardinality,
# and take out the categorical/boolean variables
df.drop(columns = ['display_address', 'street_address', 'interest_level',
'elevator', 'cats_allowed', 'hardwood_floors', 'dogs_allowed',
'doorman', 'dishwasher', 'no_fee', 'laundry_in_building', 'fitness_center',
'pre-war', 'laundry_in_unit', 'roof_deck', 'outdoor_space', 'dining_room',
'high_speed_internet', 'balcony', 'swimming_pool', 'new_construction', 'terrace',
'exclusive', 'loft', 'garden_patio', 'wheelchair_access', 'common_outdoor_space'], inplace= True)
return df
filepath = DATA_PATH + 'apartments/renthop-nyc.csv'
###Output
_____no_output_____
###Markdown
**Task 1:** Use the above `wrangle` function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
###Code
df = wrangle(filepath)
df.head()
###Output
_____no_output_____
###Markdown
**Task 2:** Use your `pandas` and dataviz skills to explore the dataset. As part of this process, make a scatter plot that shows the relationship between one of the numerical features in the dataset and the target `'price'`.**Remember:** You should plot your feature on the `X` axis and your target on the `y` axis.
###Code
df.info()
# Look to see what data types are used. It may be important to make created into a datetime format
df.dtypes
# Unique value counts help show what types of variables exist and their cardinality
df.nunique()
# Check for null values and look to see if there are any 0.0 values that are non-entries
df.isna().sum()
sns.pairplot(df)
df['price'].describe()
###Output
_____no_output_____
###Markdown
II. Split Data**Task 3:** Choose one feature from the dataset and assign it to your feature matrix `X`. Then assign the column `'price'` to the target vector `y`.**Remember:** Your feature matrix needs to be two-dimensional, but your target vector must be one-dimensional.
###Code
df['price'].hist()
df['bedrooms'].hist()
X = df[['bedrooms']]
y = df['price']
print(X.shape)
print(y.shape)
###Output
(48817, 1)
(48817,)
###Markdown
III. Establish Baseline**Task 4:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y`. Next, create a list `y_pred` that has the same length as `y` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
###Code
from sklearn.metrics import mean_absolute_error
y_pred = [y.mean()]*len(y)
baseline_mae = mean_absolute_error(y, y_pred)
print('Baseline MAE:', baseline_mae)
###Output
Baseline MAE: 1201.532252154329
###Markdown
IV. Build Model**Task 5:** Build and train a `LinearRegression` model named `model` using your feature matrix `X` and your target vector `y`.
###Code
# Step 1: Import predictor class
from sklearn.linear_model import LinearRegression
# Step 2: Instantiate predictor
model = LinearRegression()
# Step 3: Fit predictor on the (training) data
model.fit(X, y)
#dir(model)
model_slope = model.coef_
model_intercept = model.intercept_
print("The slope of the linear regression model is:", model_slope)
print("The y-intercept of the linear regression model is:", model_intercept)
plt.scatter(X, y)
plt.plot(X, y_pred, label = 'Baseline', color= 'red')
plt.xlabel('Bedrooms')
plt.ylabel('Price')
plt.legend()
###Output
_____no_output_____
###Markdown
V. Check Metrics**Task 6:** How does your model perform in comparison to your baseline? Calculate the mean absolute error for your model's predictions.
###Code
print(baseline_mae)
# Call the predict function on your model to find the linear regression equation
y_pred_lr = model.predict(X)
# Check the first 20 entries in the Array
y_pred_lr[:20]
training_mae = mean_absolute_error(y, y_pred_lr)
print('Training MAE:', training_mae)
###Output
Training MAE: 975.6496767374764
###Markdown
VI. Communicate Results You've just created a linear model. That means that your model makes predictions using an equation that looks like $\texttt{apt price} = \texttt{intercept}~+~\texttt{coefficient}~\times~\texttt{your feature}$. But what are the values of the intercept and coefficient that your model is using? **Task 7:** Print out the intercept and coefficient associated with `model`.
###Code
# This was done above in task 6
# Will show plot and final Linear Regression equation below
plt.scatter(X, y)
plt.plot(X, y_pred, label= 'Baseline', color = 'red')
plt.plot(X, y_pred_lr, label= 'Linear Regression', color = 'green')
plt.xlabel('Bedrooms')
plt.ylabel('Price')
plt.legend()
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.head()
from sklearn.linear_model import LinearRegression as LinReg
from sklearn.metrics import mean_absolute_error
import matplotlib.pyplot as plt
target = 'price'
features = ['bedrooms']
model = LinReg()
X_train = df[features]
y_train = df[target]
plt.scatter(X_train, y_train)
model.fit(X_train, y_train)
beds = 3
X_test = [[beds]]
y_pred = model.predict(X_test)
y_pred
def pred(test_val, input_model):
    if isinstance(test_val, list):  # list input: multi-feature prediction
        return input_model.predict([test_val])
    else:  # scalar input: single-feature prediction
        return input_model.predict([[test_val]])
pred(beds, model)
model.coef_
###Output
_____no_output_____
###Markdown
The coefficient in the model represents the change in the target for each one-unit increase in the feature.
###Code
from sklearn.linear_model import LinearRegression as LinReg
def gen_regression(target, features, input_dataframe):
new_model = LinReg()
new_model.fit(input_dataframe[features], input_dataframe[target])
return new_model
new_mod = gen_regression('price', ['bedrooms', 'bathrooms'], df) #Generate a model from two features, and the target being the price
pred([3,3], new_mod) #For a 3 bedroom, 3 bathroom apartment, rent ~= 8k
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
# import scikit-learn LR
from sklearn.linear_model import LinearRegression
from sklearn import metrics
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = 'C:/Users/ryanh/DS-Unit-2-Linear-Models/data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
# inspect head
df.head()
# instantiate the class
model = LinearRegression()
model
# create features, target, and train data
features = ['bedrooms']
target = 'price'
X_train = df[features]
y_train = df[target]
# fit the model
model.fit(X_train, y_train)
# apply the model to new data
bedrooms = 3
X_test = [[bedrooms]]
y_pred = model.predict(X_test)
y_pred
# definition of a function that uses the model to predict rent price by number of bedrooms in Tribeca
def predict(bedrooms):
y_pred = model.predict([[bedrooms]])
estimate = y_pred[0]
coefficient = model.coef_[0]
result = f'${estimate:,.0f} estimated rent for {bedrooms:,.0f} bedroom condo in Tribeca.'
explanation = f'In this linear regression, each additional bedroom adds ${coefficient:,.0f}.'
return result + '\n' + explanation
print(predict(3))
# stretch goal: 2 features
# let's do it with interest level
df["interest_level"].value_counts()
interest_map = {"low": 0, "medium": 1, "high": 2}
df = df.replace(interest_map)
df.head()
model = LinearRegression()
features = ["bedrooms", "interest_level"]
target = "price"
model.fit(df[features], df[target])
from sklearn.metrics import mean_absolute_error
# try to determine mae for 3 bedrooms and medium interest
prices = df[(df["bedrooms"] == 3) & (df["interest_level"] == 1)]["price"].mean()
mean_absolute_error(model.predict([[3, 1]]), [prices])
# determine mae in general
mean_absolute_error(model.predict(df[["bedrooms", "interest_level"]]), df["price"])
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*---
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Module Project: Regression IDuring the guided project, we predicted how much it would cost to buy a condo in Tribeca. For the module project, your goal will be similar: predict how much it costs to rent an apartment in New York City.Dataset source: [renthop.com](https://www.renthop.com/). Directions> Do Not Copy-Paste. You must *type* each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.>> — Zed Shaw, [Learn Python the Hard Way](https://learnpythonthehardway.org/)The tasks for this project are as follows:- **Task 1:** Import the `csv` file using the wrangle function.- **Task 2:** Conduct exploratory data analysis (EDA) and plot the relationship between one feature and the target `'price'`.- **Task 3:** Split data into feature matrix `X` and target vector `y`.- **Task 4:** Establish the baseline mean absolute error for your dataset.- **Task 5:** Build and train a `LinearRegression` model.- **Task 6:** Check the mean absolute error of your model on the training data.- **Task 7:** Extract and print the intercept and coefficient from your `LinearRegression` model.**Note**You should limit yourself to the following libraries for this project:- `matplotlib`- `numpy`- `pandas`- `sklearn` I. Wrangle Data
###Code
# pandas and numpy are needed inside the wrangle function below
import numpy as np
import pandas as pd
def wrangle(filepath):
    df = pd.read_csv(filepath)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
return df
filepath = DATA_PATH + 'apartments/renthop-nyc.csv'
###Output
_____no_output_____
###Markdown
**Task 1:** Use the above `wrangle` function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
###Code
df = wrangle(filepath)
###Output
_____no_output_____
###Markdown
**Task 2:** Use your `pandas` and dataviz skills to explore the dataset. As part of this process, make a scatter plot that shows the relationship between one of the numerical features in the dataset and the target `'price'`.**Remember:** You should plot your feature on the `X` axis and your target on the `y` axis. II. Split Data**Task 3:** Choose one feature from the dataset and assign it to your feature matrix `X`. Then assign the column `'price'` to the target vector `y`.**Remember:** Your feature matrix needs to be two-dimensional, but your target vector must be one-dimensional.
###Code
X = ...
y = ...
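# One hedged way to fill in the placeholders above (assumes the df
# returned by wrangle has a 'bedrooms' column, as in the other
# notebooks in this file):
import matplotlib.pyplot as plt
plt.scatter(df['bedrooms'], df['price'])  # Task 2 scatter: feature vs. target
plt.xlabel('bedrooms')
plt.ylabel('price')
plt.show()
X = df[['bedrooms']]  # two-dimensional feature matrix
y = df['price']       # one-dimensional target vector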
###Output
_____no_output_____
###Markdown
III. Establish Baseline**Task 4:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y`. Next, create a list `y_pred` that has the same length as `y` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
###Code
baseline_mae = ...
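# Hedged sketch: predict the mean of y everywhere and score it
from sklearn.metrics import mean_absolute_error
y_pred = [y.mean()] * len(y)
baseline_mae = mean_absolute_error(y, y_pred)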
print('Baseline MAE:', baseline_mae)
###Output
_____no_output_____
###Markdown
IV. Build Model**Task 5:** Build and train a `LinearRegression` model named `model` using your feature matrix `X` and your target vector `y`.
###Code
# Step 1: Import predictor class
# Step 2: Instantiate predictor
model = ...
# Step 3: Fit predictor on the (training) data
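# Hedged sketch following the three steps above (assumes X and y from Task 3):
from sklearn.linear_model import LinearRegression  # Step 1
model = LinearRegression()                         # Step 2
model.fit(X, y)                                    # Step 3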
###Output
_____no_output_____
###Markdown
V. Check Metrics**Task 6:** How does your model perform in comparison to your baseline? Calculate the mean absolute error for your model's predictions.
###Code
training_mae = ...
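# Hedged sketch: score the fitted model on the training data
training_mae = mean_absolute_error(y, model.predict(X))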
print('Training MAE:', training_mae)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
# Looking at the head and info of my data to make sure everything
# is what it is supposed to be
df.head()
df.info()
# My target is the rent price, and my feature is the number of bedrooms each apartment has
# Importing matplotlib
import matplotlib.pyplot as plt
# Plotting my target and feature.
plt.scatter(df['bedrooms'], df['price'])
# Setting the X and y labels of my plot
plt.xlabel('Bedrooms')
plt.ylabel('Price')
# Displaying my plot
plt.show()
# Time to use scikit-learn to get a linear regression!
# Importing the class from scikit-learn
from sklearn.linear_model import LinearRegression
# Time to Instantiate the class
model = LinearRegression()
# Assigning my target
y = df['price']
# Assigning my feature
X = df[['bedrooms']]
# Setting my Train and validation split
# First off setting up my mask
mask = X.index < 48000
X_train, y_train = X.loc[mask], y.loc[mask]
X_val, y_val = X.loc[~mask], y.loc[~mask]
# Time to get my baseline.
baseline = y_train.mean()
MAE = abs(y_train - baseline).mean()
print(f'''If my baseline model always predicts {baseline},
on average, the prediction will be off by {MAE}.''')
# Now it is time to build my model.
model.fit(X_train, y_train);
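# hedged add-on: compare the fitted model's training MAE to the baseline above
from sklearn.metrics import mean_absolute_error
print('Model MAE:', mean_absolute_error(y_train, model.predict(X_train)))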
# Grabbing my coefficient
model.coef_[0]
model.intercept_
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons. Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
# first, we're doing linear regression,
# so let's get rid of boolean/categorical
# features (in the name of doing the assignment)
coolcols = ['bathrooms', 'bedrooms', 'latitude', 'longitude', 'price']
df = df[coolcols]
df.head()
# just make sure everything is
# on the up and up
df.dtypes
df.isnull().sum()
# plot bedrooms against price
import matplotlib.pyplot as plt
plt.scatter(df['bedrooms'], df['price'])
plt.show()
# plot latitude against price
plt.scatter(df['latitude'], df['price'])
plt.show()
# for some reason, I chose this
# split our data for training/testing/validating purposes
from sklearn.model_selection import train_test_split
X_cols = ['latitude']
y_col = 'price'
X_train, X_test, y_train, y_test = train_test_split(df[X_cols], df[y_col])
X_test, X_val, y_test, y_val = train_test_split(X_test, y_test)
# create the thing that does the predicting
# just for fun, let's perform some
# "hypertuning" in "parallel"
from sklearn.linear_model import LinearRegression
# note: the 'normalize' parameter was removed in newer scikit-learn
# releases, so these calls assume an older version
m1 = LinearRegression(fit_intercept=True, normalize=False, copy_X=True)
m2 = LinearRegression(fit_intercept=True, normalize=True, copy_X=True)
m3 = LinearRegression(fit_intercept=False, copy_X=True)
# fit the models
m1.fit(X_train, y_train)
m2.fit(X_train, y_train)
m3.fit(X_train, y_train)
print(X_train.shape)
print(X_test.shape)
print(X_val.shape)
print(y_train.shape)
print(y_test.shape)
print(y_val.shape)
# get the predictions
m1_y = m1.predict(X_test)
m2_y = m2.predict(X_test)
m3_y = m3.predict(X_test)
# rank the predictions
from sklearn.metrics import r2_score
s1 = r2_score(y_test, m1_y)
s2 = r2_score(y_test, m2_y)
s3 = r2_score(y_test, m3_y)
# how'd we do?
results = [('model 1', m1.coef_[0], s1), ('model 2', m2.coef_[0], s2), ('model 3', m3.coef_[0], s3)]
print(results[0])
print(results[1])
print(results[2])
# let's take a look at these models
# actually, 1 & 2 are the same, and render
# the plot unrecognizable; we'll just plot model 3
plt.scatter(X_train, y_train, label='Training data')
plt.scatter(X_test, y_test, color='darkblue', label='Testing Data')
plt.plot(df[X_cols], df[X_cols] * m3.coef_[0], color="red", linewidth=3, label='Model Predictions')
plt.title('Our Model vs Actual Values')
plt.legend()
plt.show()
# using this model, let's make a function
# for new predictions going forward
def what_does_the_model_think(X, y):
plt.scatter(X, y, label='True data')
plt.plot(X, X * m3.coef_[0], color='red', label='Predictions')
plt.title('Our Model vs Actual Values')
plt.legend()
plt.show()
# let's test it out with our validation data
what_does_the_model_think(X_val, y_val)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
#Taking a look at the dataframe
print(df.shape)
df.head()
import plotly.express as px
#plotting the price against the longitude
px.scatter(df, x='longitude', y='price')
from sklearn.linear_model import LinearRegression
model = LinearRegression()
#selecting the features to be used for the prediction
features = ['longitude']
#selecting what we want to predict based on the features
target = ['price']
#assigning them as variables
x_train = df[features]
y_train = df[target]
#fitting the model
model.fit(x_train, y_train)
#testing the model on a new longitude
new_long = -73.8
#turning the new_long variable into a 2d array
x_test = [[new_long]]
#running the test on the new data
y_pred = model.predict(x_test)
#displaying the resulting price prediction of an apartment at -73.8 longitude
y_pred
def price_prediction(longitude):
#getting the prediction based on the input longitude
y_pred = model.predict([[longitude]])
estimate = y_pred[0]
print(model.coef_[0])
    print(int(estimate), 'dollars is the predicted price for an apartment at', int(longitude), 'longitude in New York.')
price_prediction(-74)
###Output
[-15315.11586964]
3996 dollars is the predicted price for an apartment at -74 longitude in New York.
###Markdown
The coefficient of the model is the slope of the line of best fit: for every increase of 1 in longitude, the estimated price changes by the coefficient, here about -15,315.12 dollars. A coefficient that large per unit of longitude is only trustworthy within the range of the data, since the model quickly moves from interpolating to extrapolating.
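A minimal illustration of that point (hypothetical longitudes, not part of the original assignment):
###Code
# Inside the observed longitude range (-74.1 to -73.38): interpolation
print(model.predict([[-73.97]]))
# Far outside it: extrapolation, where the fitted line has no data support
print(model.predict([[-70.0]]))
###Output
_____no_output_____
###Markdown
The stretch goal below repeats the process with two features.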
###Code
#stretch goals(making a regression model with 2 features)
#selecting the features and target for new model
feature2 = ['longitude', 'latitude']
target2 = ['price']
X_train = df[feature2]
y_train2 = df[target2]
model.fit(X_train, y_train2)
#creating some new data variables to test
new_longitude = -73.9
new_latitude = 40.751
#assigning them to an array variable to put into the model prediction
x_test2 = [[new_longitude, new_latitude]]
#running the prediction to test the price prediction
y_pred2 = model.predict(x_test2)
y_pred2
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 Assignment (same directions as above)
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.info()
import matplotlib.pyplot as plt
df.head()
df['price'].plot(kind='hist')
plt.xlabel('price');
df['bedrooms'].plot(kind='hist')
plt.xlabel('bedrooms');
df['bathrooms'].plot(kind='hist')
plt.xlabel('bathrooms');
# I wanted boxplots of price by bedrooms.
# Careful: plt.box() only toggles the axes frame on and off;
# plt.boxplot() is the function that actually draws a boxplot.
plt.boxplot(df['price'][df['bedrooms']==1])
plt.xlabel('bedrooms')
plt.ylabel('price');
df['price'][df['bedrooms']==1]
# This only allows me to plot one boxplot at a time
# otherwise they stack on top of each other
df[df['bedrooms']==1]['price'].plot(kind='box')
df[df['bedrooms']==2]['price'].plot(kind='box');
# This is what I wanted!
df.boxplot('price', by='bedrooms');
###Output
_____no_output_____
###Markdown
Let's use bedrooms as our feature and price as the target.
###Code
# y is our target and X is the feature matrix
target = 'price'
y = df[target]
X = df[['bedrooms']]
y.shape
y.head()
X.shape
# Use the mean of all rents as the baseline.
from sklearn.metrics import mean_absolute_error
y_baseline = [y.mean()] * len(y)
print('Baseline MAE = ', mean_absolute_error(y, y_baseline))
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X,y)
y_pred = model.predict(X)
y_pred
print('Training MAE = ', mean_absolute_error(y, y_pred))
###Output
Training MAE = 975.6559731054491
###Markdown
Decent improvement in MAE from the naive baseline.
###Code
plt.scatter(X,y)
plt.plot(X, y_pred, color='green', label='Best Fit Line')
plt.xlabel('bedrooms')
plt.ylabel('price')
plt.legend();
model.intercept_
model.coef_[0]
def rent_predict(X, y, br):
# Takes training data X and Y,
# a number of bedrooms br, and returns the predicted rent
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
model = LinearRegression()
model.fit(X,y)
y_pred = model.predict(X)
plt.scatter(X,y)
plt.plot(X, y_pred, color='green', label='Best Fit Line')
plt.xlabel('bedrooms')
plt.ylabel('price')
plt.legend();
predicted_rent = model.intercept_ + model.coef_[0] * br
print(f'The predicted rent for {br} bedrooms is', round(predicted_rent, 2), '\n')
#print(f'The predicted rent for {br} bedrooms is ' + "{:.2f}".format(predicted_rent))
rent_predict(df[['bedrooms']], df['price'], 4)
###Output
The predicted rent for 4 bedrooms is 5680.99
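###Markdown
A quick consistency check (a sketch): the manual `intercept + coefficient * bedrooms` arithmetic inside `rent_predict` should match `model.predict` on the bedrooms model fit earlier in this notebook.
###Code
print(model.predict([[4]]))  # should equal model.intercept_ + model.coef_[0] * 4
###Output
_____no_output_____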
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*---
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Module Project: Regression IDuring the guided project, we predicted how much it would cost to buy a condo in Tribeca. For the module project, your goal will be similar: predict how much it costs to rent an apartment in New York City.Dataset source: [renthop.com](https://www.renthop.com/). Directions> Do Not Copy-Paste. You must *type* each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.>> — Zed Shaw, [Learn Python the Hard Way](https://learnpythonthehardway.org/)The tasks for this project are as follows:- **Task 1:** Import `csv` file using wrangle function.- **Task 2:** Conduct exploratory data analysis (EDA) and plot the relationship between one feature and the target `'price'`.- **Task 3:** Split data into feature matrix `X` and target vector `y`.- **Task 4:** Establish the baseline mean absolute error for your dataset.- **Task 5:** Build and train a `LinearRegression` model.- **Task 6:** Check the mean absolute error of your model on the training data.- **Task 7:** Extract and print the intercept and coefficient from your `LinearRegression` model.**Note**You should limit yourself to the following libraries for this project:- `matplotlib`- `numpy`- `pandas`- `sklearn` I. Wrangle Data
###Code
import pandas as pd
import numpy as np

def wrangle(filepath):
    df = pd.read_csv(filepath)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
return df
filepath = DATA_PATH + 'apartments/renthop-nyc.csv'
###Output
_____no_output_____
###Markdown
**Task 1:** Use the above `wrangle` function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
###Code
df = ...
###Output
_____no_output_____
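###Markdown
One way to complete Task 1 (a sketch; the `wrangle` function and `filepath` are defined in the cell above):
###Code
df = wrangle(filepath)
df.head()
###Output
_____no_output_____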
###Markdown
**Task 2:** Use your `pandas` and dataviz skills to explore the dataset. As part of this process, make a scatter plot that shows the relationship between one of the numerical features in the dataset and the target `'price'`.**Remember:** You should plot your feature on the `X` axis and your target on the `y` axis. II. Split Data**Task 3:** Choose one feature from the dataset and assign it to your feature matrix `X`. Then assign the column `'price'` to the target vector `y`.**Remember:** Your feature matrix needs to be two-dimensional, but your target vector must be one-dimensional.
###Code
X = ...
y = ...
###Output
_____no_output_____
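###Markdown
A sketch for Task 3 (any numeric column works as the single feature; bedrooms is an assumption here):
###Code
X = df[['bedrooms']]   # feature matrix: two-dimensional
y = df['price']        # target vector: one-dimensional
###Output
_____no_output_____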
###Markdown
III. Establish Baseline**Task 4:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y`. Next, create a list `y_pred` that has the same length as `y` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
###Code
baseline_mae = ...
print('Baseline MAE:', baseline_mae)
###Output
_____no_output_____
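###Markdown
A sketch for Task 4 (assumes the `y` from Task 3):
###Code
from sklearn.metrics import mean_absolute_error

y_pred = [y.mean()] * len(y)   # predict the mean price for every row
baseline_mae = mean_absolute_error(y, y_pred)
print('Baseline MAE:', baseline_mae)
###Output
_____no_output_____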
###Markdown
IV. Build Model**Task 5:** Build and train a `LinearRegression` model named `model` using your feature matrix `X` and your target vector `y`.
###Code
# Step 1: Import predictor class
# Step 2: Instantiate predictor
model = ...
# Step 3: Fit predictor on the (training) data
###Output
_____no_output_____
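###Markdown
A sketch for Task 5 (assumes the `X` and `y` from Task 3):
###Code
from sklearn.linear_model import LinearRegression

model = LinearRegression()   # instantiate the predictor
model.fit(X, y)              # fit it on the training data
###Output
_____no_output_____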
###Markdown
V. Check Metrics**Task 6:** How does your model perform in comparison to your baseline? Calculate the mean absolute error for your model's predictions.
###Code
training_mae = ...
print('Training MAE:', training_mae)
###Output
_____no_output_____
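###Markdown
A sketch for Task 6 (compares the model's predictions against the true prices):
###Code
training_mae = mean_absolute_error(y, model.predict(X))
print('Training MAE:', training_mae)
###Output
_____no_output_____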
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 Assignment (same directions as above)
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
# looking at columns to see contenders for the X (independent) variable
# we see that "price" is our y (dependent) variable
df.describe()
df.plot.scatter(x='bedrooms', y='price');
# engineering a "room" feature to see if it creates a better scatter plot
def add_bedrooms_bathrooms(row):
    return row['bedrooms'] + row['bathrooms']
df['rooms'] = df.apply(add_bedrooms_bathrooms, axis=1)
df.head()
df.plot.scatter(x="rooms", y="price");
###Output
_____no_output_____
###Markdown
Creating "Baseline" Model with "rooms" as X and "price" as y
###Code
# importing and instantiating model class
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# declaring data that will train/test our model
# going to shuffle the df since splitting into train/test
# .sample returns a randomly ordered sample of a DF, frac=1 tells the function
# to return 100% of the DF randomly ordered
df = df.sample(frac=1).reset_index(drop=True)
features = ["rooms"]
target = "price"
X = df[features]
y = df[target]
X_train = X[:40000]
y_train = y[:40000]
X_test = X[40000:]
y_test = y[40000:]
print("# train observations:", X_train.shape[0])
print("# test observations:", X_test.shape[0])
# fit the model on training set
model.fit(X_train, y_train)
from sklearn.metrics import mean_absolute_error
predicted = model.predict(X_test)
mean_absolute_error(predicted, y_test)
###Output
_____no_output_____
###Markdown
Our model that uses "rooms" as its independent variable has a mean absolute error of about $911. Let's see if "bedrooms" as the independent variable gives better results.
###Code
# creating a predict function to report the coefficient alongside a prediction
def predict(curr_model, curr_input, feature_name):
    prediction = curr_model.predict(curr_input)
    print(f"This model adds ${curr_model.coef_[0]:.1f} for every {feature_name} of an observation.")
    print(f"Prediction for {curr_input[0][0]} {feature_name}s is ${prediction[0]:,.1f}")
predict(model, [[5]], 'room')
# reinstantiating a LinearRegression class object
model = LinearRegression()
# declaring our data
# not shuffling since we did that last time
features = ["bedrooms"]
target = "price"
X = df[features]
y = df[target]
X_train = X[:40000]
y_train = y[:40000]
X_test = X[40000:]
y_test = y[40000:]
print("# train observations:", X_train.shape[0])
print("# test observations:", X_test.shape[0])
model.fit(X_train, y_train)
predicted = model.predict(X_test)
mean_absolute_error(predicted, y_test)
###Output
_____no_output_____
###Markdown
This model performed slightly worse than the model that used our engineered feature (rooms) as the independent variable. Now let's try multiple features.
###Code
# declaring model
model = LinearRegression()
# these are the 3 features we will use for our predictions this time around
features = ["bedrooms", "bathrooms", "elevator"]
target = "price"
X = df[features]
y = df[target]
X_train = X[:40000]
y_train = y[:40000]
X_test = X[40000:]
y_test = y[40000:]
print("# train observations:", X_train.shape[0])
print("# test observations:", X_test.shape[0])
# fitting our model to the training data
model.fit(X_train, y_train)
# seeing how our model performs on the test data
predicted = model.predict(X_test)
mean_absolute_error(predicted, y_test)
# checking coef_ of this model that uses multiple features
print(model.coef_)
# adds $412 per bedroom
# adds $2002 per bathroom
# adds $500 if the unit has an elevator
###Output
[ 412.80703099 2002.12503722 499.91662345]
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 Assignment (same directions as above)
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
# Start EDA
df.info()
df.describe()
df.head()
# Our target is 'price', but which feature should we use as the independent variable?
import matplotlib.pyplot as plt
df['price'].plot(kind='hist')
plt.xlabel('Price');
# Bathrooms show a relationship to price
df['bathrooms'].plot(kind='hist')
plt.xlabel('Bathrooms');
# Bedrooms might have a closer relationship to price
df['bedrooms'].plot(kind='hist')
plt.xlabel('Bedrooms');
# Scatter plots for comparison
plt.scatter(df['bathrooms'], df['price'])
plt.xlabel('Bathrooms')
plt.ylabel('Price of Rent');
# Bedrooms has a better spread of data
plt.scatter(df['bedrooms'], df['price'])
plt.xlabel('Bedrooms')
plt.ylabel('Price of Rent');
# Time to split data
# Split the 'target' from the 'feature matrix'
target = 'price'
y = df[target]
X = df[['bedrooms']]
# Establish a baseline using sklearn
from sklearn.metrics import mean_absolute_error
y_pred = [y.mean()] * len(y)
print('Baseline MAE:', mean_absolute_error(y, y_pred))
# Build a model
from sklearn.linear_model import LinearRegression
# Instantiate the model
model = LinearRegression()
# Fit model to training data
model.fit(X,y)
# Make predictions with our model
y_pred = model.predict(X)
# Retest metrics
print('Training MAE:', mean_absolute_error(y, y_pred))
plt.scatter(X, y)
plt.plot(X, y_pred, color='red', label='Linear Model')
plt.xlabel('Bedrooms')
plt.ylabel('Price of Rent ($)')
plt.legend();
model.coef_
model.intercept_
# Rent starts at a baseline of $2268, increasing by $853 per bedroom
# Function to return rent price based on bedrooms
def rent_per_bedrooms(beds):
    # model.coef_ is an array, so take element 0 for a scalar slope
    rent = model.intercept_ + (model.coef_[0] * beds)
    return print(f'A {beds} bedroom apartment will likely cost ${rent:.2f} per month.')
# Testing
rent_per_bedrooms(3)
rent_per_bedrooms(1.5)
###Output
A 1.5 bedroom apartment will likely cost $3547.86 per month.
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 Assignment (same directions as above)
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.head()
df.columns
import plotly.express as px
px.scatter(df, x='bedrooms', y='price')
from sklearn.linear_model import LinearRegression
model = LinearRegression()
features = ['bedrooms']
target = 'price'
X_train = df[features]
y_train = df[target]
model.fit(X_train, y_train)
X_train.shape
y_train.shape
bedrooms = 3
X_test = [[bedrooms]]
y_pred = model.predict(X_test)
y_pred
m = model.coef_[0]
b = model.intercept_
print('y = mx + b')
print(f'y = {m:,.0f}*x + {b:,.0f}')
print(f'price = {m:,.0f}*bedrooms + {b:,.0f}')
def predict(bedrooms):
y_pred = model.predict([[bedrooms]])
estimate = y_pred[0]
coefficient = model.coef_[0]
result = f'${estimate:,.0f} estimated price for {bedrooms:,.0f} bedroom condo'
explanation = f'In this linear regression, each additional bedroom adds ${coefficient:,.0f}.'
return result + '\n' + explanation
print(predict(4))
from ipywidgets import interact
interact(predict, bedrooms=(1,8));
# Using bedrooms alone isn't a great predictor for this data set.
# As the next two lines show, prices for three-bedroom condos in the
# training set vary enormously.
three_beds_price = df.loc[df['bedrooms']==3]
three_beds_price.price.describe()
features_two = ['latitude', 'longitude']
target = ['price']
X_train = df[features_two]
y_train = df[target]
model.fit(X_train, y_train)
latitude = 40.7145
longitude = -73.9425
X_test =[[latitude, longitude]]
y_pred = model.predict(X_test)
y_pred
def predict_bylocation(lat, long):
    X_test = [[lat, long]]
    y_pred = model.predict(X_test)
    # predict returns a 2-D array here because the target was a DataFrame
    estimate = y_pred[0][0]
    result = f'${estimate:,.2f} estimated price for condo'
    return result
predict_bylocation(40.7388, -74.0018)
%matplotlib inline
import seaborn as sns; sns.set()
# Iris example following the VanderPlas 5-step process
iris = sns.load_dataset('iris')
iris.head()
sns.pairplot(iris, hue='species', height=1.5);
X_iris = iris.drop('species', axis=1)
X_iris.shape
y_iris = iris['species']
y_iris.shape
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 Assignment (same directions as above)
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
###Output
_____no_output_____
###Markdown
My work: Look at the data. Choose a feature, and plot its relationship with the target.
###Code
df
df.dtypes
df.describe()
df.describe(exclude='number')
df.isnull().sum()
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
target = 'price'
feature = 'interest_level'
plt.scatter(df[feature], df[target], alpha=0.3)
plt.ylabel(target)
plt.xlabel(feature)
plt.title(f'{target} by {feature}')
plt.show()
###Output
_____no_output_____
###Markdown
looks like 'interest_level' will not be a good predictor of price
###Code
for feature in df.select_dtypes(include='number').columns:
if feature == target:
continue
plt.scatter(df[feature], df[target], alpha=0.3)
plt.xlabel(feature)
plt.ylabel(target)
plt.title(f'{target} by {feature}')
plt.show()
###Output
_____no_output_____
###Markdown
'latitude' seems like it would offer the best prediction if using a single feature. Begin with a baseline: use the mean as the baseline prediction.
###Code
y_true = df[target]
y_pred = [y_true.mean()] * len(y_true)
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_true, y_pred)
print(f'Baseline MAE: {mae:.2f} dollars')
###Output
Baseline MAE: 1201.53 dollars
###Markdown
Use scikit-learn for linear regression with one feature. You can follow the 5-step process from Jake VanderPlas. 1. Import the appropriate estimator class from Scikit-Learn
###Code
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
2. Instantiate this class
###Code
model = LinearRegression(n_jobs=-1)
###Output
_____no_output_____
###Markdown
3. Arrange X features matrix & y target vector
###Code
target = 'price'
features = ['latitude']
X_ = df[features]
y_true = df[target]
###Output
_____no_output_____
###Markdown
4. Fit the model
###Code
model.fit(X_, y_true)
###Output
_____no_output_____
###Markdown
5. Apply the model to new data
###Code
new_data = [[40.7283]]
model.predict(new_data)
###Output
_____no_output_____
###Markdown
Define a function to make new predictions and explain the model coefficient.
###Code
def predict(dataframe, target:str, feature:str, value):
assert feature in dataframe.columns, f'feature {feature} not in dataframe'
assert target in dataframe.columns, f'target {target} not in dataframe'
y_true = dataframe[target]
X = dataframe[[feature]]
model = LinearRegression(n_jobs=-1)
model.fit(X, y_true)
prediction = model.predict([[value]])[0]
coefficient = model.coef_[0]
return f'{prediction} predicted {target} for {feature}={value}.\nIn this linear regression, each additional unit of {feature} adds {coefficient} to the {target}.'
print(predict(df, 'price', 'bedrooms', 5))
print(predict(df, 'price', 'bathrooms', 3.5))
print(predict(df, 'bathrooms', 'latitude', 40.7539))
###Output
1.2023009344888012 predicted bathrooms for latitude=40.7539.
In this linear regression, each additional unit of latitude adds 0.16243037318682954 to the bathrooms.
###Markdown
Do linear regression with two or more features
###Code
target = 'price'
features = ['latitude', 'longitude', 'bedrooms', 'bathrooms']
X = df[features]
y_true = df[target]
model = LinearRegression(n_jobs=-1)
model.fit(X, y_true)
model.predict([[40.7539, -73.9677, 1, 1.0]])[0]
model.predict([df.loc[2542, features]])[0]
df.loc[2542, 'price']
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 Assignment (same directions as above)
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv',
parse_dates=['created'])
assert df.shape == (49352, 34)
df.head()
df.info()
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
###Output
_____no_output_____
###Markdown
EDA
###Code
#Choose a feature, and plot its relationship with the target.
df['price'].plot(kind='hist')
df.describe()['price']
df['bathrooms'].plot(kind='hist')
df.describe()['bathrooms']
pd.crosstab(df['price'],df['bathrooms'])
###Output
_____no_output_____
###Markdown
split data
###Code
y = df['price']
X = df[['bathrooms']]
import matplotlib.pyplot as plt
plt.scatter(X, y)
plt.xlabel('Bathrooms')
plt.ylabel('Price')
#line of best fit
###Output
_____no_output_____
###Markdown
train validation split
###Code
X.info()
X.shape
#create a mask that sends roughly 20% of the data to train and 80% to validation
mask = X.index < (48818*0.2)
mask
X_train, y_train = X.loc[mask], y.loc[mask]
X_val, y_val = X.loc[~mask], y.loc[~mask]
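# Note: the index was not reset after the outlier filter, so an index-based mask
# can be off; sklearn offers a safer split (left commented out so the mask split
# above stays in effect):
# from sklearn.model_selection import train_test_split
# X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.2, random_state=42)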
print(X_train.shape)
print(X_val.shape)
print(y_train.shape)
print(y_val.shape)
#creating a baseline
baseline_guess = y_train.mean()
MAE = abs(y_train - baseline_guess).mean()
print(f'''if baseline model always predicts {baseline_guess},
on average, the prediction will be off by {MAE}.''')
#Use scikit-learn for linear regression with one feature.
from sklearn.linear_model import LinearRegression
#instantiate predictor
lr = LinearRegression()
#train predictor using training data
lr.fit(X_train, y_train)
# lr.coef_ is an array; take the zeroth element since we trained on one feature
lr.coef_[0]
lr.intercept_
#Define a function to make new predictions and explain the model coefficient.
#rent price = # of bathrooms * 2454.36 + 612.41
def rent_price(bath):
rent = bath * lr.coef_[0] + lr.intercept_
return rent
rent_price(1)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*---
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Module Project: Regression I (same directions as above). I. Wrangle Data
###Code
import pandas as pd
import numpy as np
def wrangle(filepath):
df = pd.read_csv(filepath)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
return df
filepath = DATA_PATH + 'apartments/renthop-nyc.csv'
###Output
_____no_output_____
###Markdown
**Task 1:** Use the above `wrangle` function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
###Code
df = wrangle(filepath)
df.head()
###Output
_____no_output_____
###Markdown
**Task 2:** Use your `pandas` and dataviz skills to explore the dataset. As part of this process, make a scatter plot that shows the relationship between one of the numerical features in the dataset and the target `'price'`.**Remember:** You should plot your feature on the `X` axis and your target on the `y` axis.
###Code
import matplotlib.pyplot as plt
df.nunique()
df.describe()
df.info()
plt.scatter(df['bathrooms'], df['price'])
plt.xlabel('Bathrooms')
plt.ylabel('Price')
plt.scatter(df['bedrooms'], df['price'])
plt.xlabel('Bedrooms')
plt.ylabel('Price')
plt.scatter(df['latitude'], df['price'])
plt.scatter(df['longitude'], df['price'])
###Output
_____no_output_____
###Markdown
II. Split Data**Task 3:** Choose one feature from the dataset and assign it to your feature matrix `X`. Then assign the column `'price'` to the target vector `y`.**Remember:** Your feature matrix needs to be two-dimensional, but your target vector must be one-dimensional.
###Code
x = df[['bathrooms']]   # feature matrix: two-dimensional
y = df['price']         # target vector: one-dimensional
print(f'Shape of x is {x.shape} and shape of y is {y.shape}.')
###Output
Shape of x is (48817, 1) and shape of y is (48817,).
###Markdown
III. Establish Baseline**Task 4:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y`. Next, create a list `y_pred` that has the same length as `y` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
###Code
from sklearn.metrics import mean_absolute_error
y_pred = [y.mean()]*len(y)
baseline_mae = mean_absolute_error(y, y_pred)
print('Baseline MAE:', baseline_mae)
###Output
Baseline MAE: 1201.532252154329
###Markdown
IV. Build Model**Task 5:** Build and train a `LinearRegression` model named `model` using your feature matrix `X` and your target vector `y`.
###Code
# Step 1: Import predictor class
from sklearn.linear_model import LinearRegression
# Step 2: Instantiate predictor
model = LinearRegression()
# Step 3: Fit predictor on the (training) data
model.fit(x,y)
###Output
_____no_output_____
###Markdown
V. Check Metrics**Task 6:** How does your model perform in comparison to your baseline? Calculate the mean absolute error for your model's predictions.
###Code
# MAE between the model's predictions and the true prices
training_mae = mean_absolute_error(y, model.predict(x))
print('Training MAE:', training_mae)
###Output
_____no_output_____
###Markdown
VI. Communicate Results You've just created a linear model. That means that your model makes predictions using an equation that looks like $\texttt{apt price} = \texttt{intercept}~+~\texttt{coefficient}~\times~\texttt{your feature}$. But what are the values of the intercept and coefficient that your model is using? **Task 7:** Print out the intercept and coefficient associated with `model`.
###Code
slope = model.coef_[0]
intercept = model.intercept_
print(f'Using a linear regression to predict the price from the number of bathrooms, the intercept is {intercept} and the slope is {slope}.')
print(f'The mean error is {training_mae}.')
if training_mae < baseline_mae:
    print(f'The model beats the baseline MAE of {baseline_mae}, so bathrooms carries some signal about price.')
else:
    print(f'The model does not beat the baseline MAE of {baseline_mae}; bathrooms alone is a weak predictor of price.')
df.drop(columns = ['created', 'description'], inplace = True)
# Build a one-column feature matrix for each remaining candidate feature
feature_cols = ['bathrooms', 'bedrooms', 'latitude', 'longitude', 'elevator',
                'cats_allowed', 'dogs_allowed', 'hardwood_floors', 'doorman',
                'dishwasher', 'no_fee', 'laundry_in_building', 'fitness_center',
                'laundry_in_unit', 'roof_deck', 'outdoor_space', 'dining_room',
                'high_speed_internet', 'balcony', 'swimming_pool', 'terrace',
                'exclusive', 'loft', 'garden_patio', 'wheelchair_access',
                'common_outdoor_space']
feature_matrices = {col: df[[col]] for col in feature_cols}
# I want to test these other variables as predictors; see the sketch below.
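# A sketch of that comparison (assumes df, y, LinearRegression, and
# mean_absolute_error are already in scope from the cells above): fit a
# one-feature model per column and report its training MAE
for col, X_col in feature_matrices.items():
    m = LinearRegression().fit(X_col, y)
    print(col, mean_absolute_error(y, m.predict(X_col)))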
df.head()
df.nunique()
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*---
###Markdown
Regression 1 Assignment (same directions as above)
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
# Look at the data. Choose a feature, and plot its relationship with the target.
df.head()
df.plot.scatter('bedrooms', 'price')
# Use scikit-learn for linear regression with one feature.
from sklearn.linear_model import LinearRegression
model = LinearRegression() # instantiation
features = ['bedrooms'] # 'x' variables
target = 'price' # 'y' variable
X_train = df[features] # extracting from the dataframe
y_train = df[target] # extracting from the dataframe
model.fit(X_train, y_train) # producing the linear model
# Define a function to make new predictions and explain the model coefficient.
def rent_predictor(my_model, bedrooms):
"""
Function that takes a sklearn linear model and number of bedrooms as inputs\
and outputs a descriptive statement about what the model predicts as the\
rental price.
"""
estimate = my_model.predict([[bedrooms]])[0]
coeff = my_model.coef_[0]
    message = 'According to my linear model the estimated price of a '
print(message + '{0} bedroom condo is ${1:.2f}'.format(bedrooms, estimate))
print('Each bedroom adds ${:.2f} according to the model.'.format(coeff))
rent_predictor(model, 3)
###Output
According to my linear model the estimated price of a 3 bedroom condo is $4827.74
Each bedroom adds $853.25 according to the model.
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*---
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Module Project: Regression IDuring the guided project, we predicted how much it would cost to buy a condo in Tribeca. For the module project, your goal will be similar: predict how much it costs to rent an apartment in New York City.Dataset source: [renthop.com](https://www.renthop.com/). Directions> Do Not Copy-Paste. You must *type* each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.>> — Zed Shaw, [Learn Python the Hard Way](https://learnpythonthehardway.org/)The tasks for this project are as follows:- **Task 1:** Import the `csv` file using the wrangle function.- **Task 2:** Conduct exploratory data analysis (EDA) and plot the relationship between one feature and the target `'price'`.- **Task 3:** Split data into feature matrix `X` and target vector `y`.- **Task 4:** Establish the baseline mean absolute error for your dataset.- **Task 5:** Build and train a `LinearRegression` model.- **Task 6:** Check the mean absolute error of our model on the training data.- **Task 7:** Extract and print the intercept and coefficient from your `LinearRegression` model.**Note**You should limit yourself to the following libraries for this project:- `matplotlib`- `numpy`- `pandas`- `sklearn` I. Wrangle Data
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error
def wrangle(filepath):
df = pd.read_csv(filepath)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
return df
filepath = DATA_PATH + 'apartments/renthop-nyc.csv'
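# Note (a sketch, not part of the assignment): the same trimming can be written
# with pandas quantiles, e.g.
# df[df['price'].between(df['price'].quantile(0.005), df['price'].quantile(0.995))]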
###Output
_____no_output_____
###Markdown
**Task 1:** Use the above `wrangle` function to import the `renthop-nyc.csv` file into a DataFrame named `df`.
###Code
df = wrangle(filepath)  # use the path built above so this works on Colab too
print(df.shape)
print(df.info())
df.head()
###Output
(48817, 34)
<class 'pandas.core.frame.DataFrame'>
Int64Index: 48817 entries, 0 to 49351
Data columns (total 34 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 bathrooms 48817 non-null float64
1 bedrooms 48817 non-null int64
2 created 48817 non-null object
3 description 47392 non-null object
4 display_address 48684 non-null object
5 latitude 48817 non-null float64
6 longitude 48817 non-null float64
7 price 48817 non-null int64
8 street_address 48807 non-null object
9 interest_level 48817 non-null object
10 elevator 48817 non-null int64
11 cats_allowed 48817 non-null int64
12 hardwood_floors 48817 non-null int64
13 dogs_allowed 48817 non-null int64
14 doorman 48817 non-null int64
15 dishwasher 48817 non-null int64
16 no_fee 48817 non-null int64
17 laundry_in_building 48817 non-null int64
18 fitness_center 48817 non-null int64
19 pre-war 48817 non-null int64
20 laundry_in_unit 48817 non-null int64
21 roof_deck 48817 non-null int64
22 outdoor_space 48817 non-null int64
23 dining_room 48817 non-null int64
24 high_speed_internet 48817 non-null int64
25 balcony 48817 non-null int64
26 swimming_pool 48817 non-null int64
27 new_construction 48817 non-null int64
28 terrace 48817 non-null int64
29 exclusive 48817 non-null int64
30 loft 48817 non-null int64
31 garden_patio 48817 non-null int64
32 wheelchair_access 48817 non-null int64
33 common_outdoor_space 48817 non-null int64
dtypes: float64(3), int64(26), object(5)
memory usage: 13.0+ MB
None
###Markdown
**Task 2:** Use your `pandas` and dataviz skills to explore the dataset. As part of this process, make a scatter plot that shows the relationship between one of the numerical features in the dataset and the target `'price'`.**Remember:** You should plot your feature on the `X` axis and your target on the `y` axis.
###Code
plt.scatter(df['bedrooms'], df['price'])
plt.xlabel('Number of bedrooms')
plt.ylabel('Rent Price')
###Output
_____no_output_____
###Markdown
II. Split Data**Task 3:** Choose one feature from the dataset and assign it to your feature matrix `X`. Then assign the column `'price'` to the target vector `y`.**Remember:** Your feature matrix needs to be two-dimensional, but your target vector must be one-dimensional.
###Code
X = df[['bedrooms']]
y = df['price']
print(y.shape)
print(X.shape)
###Output
(48817,)
(48817, 1)
###Markdown
III. Establish Baseline**Task 4:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model. First, calculate the mean of `y`. Next, create a list `y_pred` that has the same length as `y` and where every item in the list is the mean. Finally, use `mean_absolute_error` to calculate your baseline.
###Code
y_mean = y.mean()
print(y_mean)
y_pred = [y_mean] * len(y)
# print(y_pred[0:5])
baseline_mae = mean_absolute_error(y,y_pred)
print('Baseline MAE:', baseline_mae)
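# Equivalent shortcut (a sketch): the baseline MAE is just the mean absolute
# deviation of y around its mean, so this should match the value printed above.
baseline_mae_alt = np.abs(y - y.mean()).mean()
assert abs(baseline_mae_alt - baseline_mae) < 1e-6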
###Output
3579.5852469426636
Baseline MAE: 1201.532252154329
###Markdown
IV. Build Model**Task 5:** Build and train a `LinearRegression` model named `model` using your feature matrix `X` and your target vector `y`.
###Code
# Step 1: Import predictor class
from sklearn.linear_model import LinearRegression
# Step 2: Instantiate predictor
model = LinearRegression()
# Step 3: Fit predictor on the (training) data
model.fit(X,y)
###Output
_____no_output_____
###Markdown
V. Check Metrics**Task 6:** How does your model perform in comparison to your baseline? Calculate the mean absolute error for your model's predictions.
###Code
training_mae = mean_absolute_error(y, model.predict(X))
print('Training MAE:', training_mae)
###Output
Training MAE: 975.6496767374764
###Markdown
VI. Communicate Results You've just created a linear model. That means that your model makes predictions using an equation that looks like $\texttt{apt price} = \texttt{intercept}~+~\texttt{coefficient}~\times~\texttt{your feature}$. But what are the values of the intercept and coefficient that your model is using? **Task 7:** Print out the intercept and coefficient associated with `model`.
###Code
intercept = model.intercept_
print(intercept)
coefficient = model.coef_[0]
print(coefficient)
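# Sanity check (a sketch): the fitted line is price = intercept + coefficient * bedrooms,
# so a hand-built prediction for a hypothetical 2-bedroom unit should match model.predict.
manual_pred = intercept + coefficient * 2
assert abs(manual_pred - model.predict([[2]])[0]) < 1e-6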
###Output
2267.987688178934
853.266408483175
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.head()
# Add a column counting how many "features" an apartment has.
#Isolate DF to boolean columns
df2 = df.iloc[:,10:60]
#Count amenities per row (renamed to avoid shadowing the built-in `sum`)
amenity_count = df2.sum(axis=1)
#Add the count to the original df
df['sum'] = amenity_count
df.head()
#Longitude + Latitude price map for fun
import matplotlib.pyplot as plt
plt.scatter(df['longitude'], df['latitude'], c=df['price'], cmap=plt.cm.hsv,s=1)
plt.xlabel('longitude')
plt.ylabel('latitude');
#Features vs Price
import matplotlib.pyplot as plt
plt.scatter(df['sum'], df['price']);
from sklearn.linear_model import LinearRegression
#Instantiate
model = LinearRegression(fit_intercept=True)
model
#Create
target = 'price'
y = df[target]
X = df[['sum']]
#Mean Absolute Error
from sklearn.metrics import mean_absolute_error
y_pred = [y.mean()] * len(y)
print('Baseline MAE:', mean_absolute_error(y, y_pred))
#Predictive linear model
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X,y)
y_pred = model.predict(X)
print('Training MAE', mean_absolute_error(y, y_pred))
#Plot
plt.scatter(X, y)
plt.plot(X, y_pred, color='red', label='Linear Model')
plt.xlabel('Number of features')
plt.ylabel('Price [$]')
plt.legend();
print(f'PRICE = {model.intercept_} + {model.coef_[0]} x Features')
#Base price is $2843.40, and every additional feature costs $78.78
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
## BEGIN
df.head()
## Keep only the first nine columns (drop interest_level and the amenity flags)
df = df.drop(df.columns[9:], axis=1)
df.head()
## Try out and visualize different features plotted against the target
y = df['price']
X = df[['latitude']]
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.scatter(X, y)
ax.set_xlabel('Latitude')
ax.set_ylabel('Rent')
plt.show()
## Instantiate our class
from sklearn.linear_model import LinearRegression
model = LinearRegression()
## Fit the data
model.fit(X, y)
print(model.coef_[0]) # For every single unit (in this case .1 degree) increase in latitude, the rent price decreases by about $1600
print(model.intercept_)
## Create our prediction data and regression line function
y_pred = model.predict(X)
y_line = model.intercept_ + (model.coef_[0]*X)
fig, ax = plt.subplots()
ax.scatter(x=df['latitude'], y=df['price'])
ax.set_xlabel('Latitude')
ax.set_ylabel('Rent')
ax.plot(X, y_pred, color='r', label='Regression Line')
ax.legend()
plt.show()
## STRETCH GOAL
X2 = df[['latitude', 'longitude']]
model2 = LinearRegression()
model2.fit(X2, y)
print(model2.coef_)
print(model2.intercept_)
y_pred = model2.predict(X2)
y_line = model2.intercept_ + model2.coef_[0]*df['latitude'] + model2.coef_[1]*df['longitude']
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [x] Look at the data. Choose a feature, and plot its relationship with the target.- [x] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [x] Define a function to make new predictions and explain the model coefficient.- [x] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [x] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.head()
df.corr()
import plotly.express as px
px.scatter(df, x='bathrooms', y='price')
from sklearn.linear_model import LinearRegression
model = LinearRegression() # instantiate object
features = ['bathrooms'] #define features
target = ['price'] #define our target variable
x_train = df[features]
y_train = df[target]
model.fit(x_train, y_train) #feed training data into linear regression model
def predict_apartment_price(bathrooms):
y_pred = model.predict([[bathrooms]])
estimate = y_pred[0]
coefficient = model.coef_[0]
print("The estimated price of a " + str(bathrooms) + " bathroom apartment in New York City is: " + str(estimate) + ". The model coefficient is: " + str(coefficient))
predict_apartment_price(3)
###Output
The estimated price of a 3 bathroom apartment in New York City is: [8207.02772077]. The model coefficient is: [2573.37439508]
###Markdown
Stretch Goal: Multi-Feature regression
###Code
from sklearn.model_selection import train_test_split
train, test = train_test_split(df, random_state=0) #keep same split each time code is run.
model = LinearRegression() # instantiate object
features = ['bathrooms', 'bedrooms', 'fitness_center', 'dining_room', 'dishwasher', 'laundry_in_unit', 'balcony', 'cats_allowed', 'elevator', 'doorman'] #define features
target = ['price'] #define our target variable
model.fit(train[features], train[target]) #feed training data into linear regression model
from sklearn.metrics import mean_absolute_error #use MAE to calculate accuracy on our whole training/test datasets.
print("Training accuracy: ", mean_absolute_error(train[target], model.predict(train[features])))
print("Testing accuracy: ", mean_absolute_error(test[target], model.predict(test[features])))
def predict_apartment_price(bathrooms, bedrooms, fitness_center, dining_room, dishwasher, laundry_in_unit, balcony, cats_allowed, elevator, doorman):
features = [bathrooms, bedrooms, fitness_center, dining_room, dishwasher, laundry_in_unit, balcony, cats_allowed, elevator, doorman]
y_pred = model.predict([features])
estimate = y_pred[0]
coefficient = model.coef_[0]
print("The estimated price of an apartment with entered arguements in New York City is: " + str(estimate))
predict_apartment_price(bathrooms=4, bedrooms=3, fitness_center=1, dining_room=1, dishwasher=1, laundry_in_unit=1, balcony=1, cats_allowed=1, elevator=1, doorman=1)
###Output
The estimated price of an apartment with the entered arguments in New York City is: [10236.05965656]
###Markdown
Now we are taking the longitude/latitude data and applying a DBSCAN clustering model to it. I considered k-means, but k-means groups based on Euclidean distance, which is not what we want for geographical data.
###Code
import pandas as pd, numpy as np, matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN
from geopy.distance import great_circle
from shapely.geometry import MultiPoint
coords = df[['latitude','longitude']]
miles_per_radian = 3959
epsilon = 0.10 / miles_per_radian
db = DBSCAN(eps=epsilon, min_samples=5, algorithm='ball_tree', metric='haversine').fit(np.radians(coords))
cluster_labels = db.labels_
num_clusters = len(set(cluster_labels))
print(num_clusters)
df['labels'] = cluster_labels
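# A quick look (a sketch): listings per spatial cluster. With DBSCAN, the
# label -1 marks noise points that were not assigned to any cluster.
cluster_sizes = pd.Series(cluster_labels).value_counts()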
train, test = train_test_split(df, random_state=0) #seed value kept the same as above, so we know we're working with the same test/train data.
model = LinearRegression() # instantiate object
features = ['bathrooms', 'bedrooms', 'fitness_center', 'dining_room', 'dishwasher', 'laundry_in_unit', 'balcony', 'cats_allowed', 'elevator', 'doorman', 'labels'] #define features, with added labels column
target = ['price'] #define our target variable
model.fit(train[features], train[target]) #feed training data into linear regression model
print("Training accuracy: ", mean_absolute_error(train[target], model.predict(train[features])))
print("Testing accuracy: ", mean_absolute_error(test[target], model.predict(test[features])))
###Output
Training accuracy: 742.6278090385553
Testing accuracy: 755.9800347692847
###Markdown
By adding even an imperfect DBSCAN clustering feature, we improved our linear regression out-of-sample score by about 20. With a more finely tuned model, clustering within individual boroughs for example, we could improve much more.
###Code
df.corr()
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(train[features], train['price'])
print("Training accuracy: ", mean_absolute_error(train['price'], model.predict(train[features])))
print("Testing accuracy: ", mean_absolute_error(test['price'], model.predict(test[features])))
###Output
Training accuracy: 484.10060782362007
Testing accuracy: 569.0526273360023
###Markdown
Out of curiosity, I added a random forest model for comparison. As we can see, it is a massive improvement over the linear regression model, improving our score by nearly 186 (no surprise). I also imagine the clustering of coordinates gets us a larger net gain with the random forest than with the linear regression model.
###Code
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv',
parse_dates=['created'])
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
#checking the head of the data set to see what we're working with
df.head()
#checking the types of data and number of non-null entries in the data set
df.info()
#confirming that the majority of variables are categorical
df.describe()
###Output
_____no_output_____
###Markdown
EDA
###Code
#graphing the price
df['price'].plot(kind='hist')
#checking the average and quartile values of price
df.describe()['price']
#graphing the number of bedrooms
df['bedrooms'].plot(kind='hist')
#checking the average number of bedrooms, as well as the quartile values
df.describe()['bedrooms']
#graphing bedrooms against price
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
plt.scatter(df['bedrooms'], df['price'])
fig.set_facecolor('w')
plt.show()
###Output
_____no_output_____
###Markdown
Split the data
###Code
#setting the target variable
y = df['price']
#creating the feature matrix
X = df[['bedrooms']]
#confirming X has the correct shape
X.shape
#checking the graph of X vs y
fig, ax = plt.subplots()
plt.scatter(X, y)
plt.xlabel('Bedrooms')
plt.ylabel('Price')
fig.set_facecolor('w')
plt.show()
#creating the mask
mask = X.index < 39482
#creating the training data
X_train, y_train = X.loc[mask], y.loc[mask]
#creating the validation data
X_val, y_val = X.loc[~mask], y.loc[~mask]
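# Sanity check (a sketch): the mask and its complement should partition the rows.
assert len(X_train) + len(X_val) == len(X)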
#checking the shapes of the data sets to confirm the number of entries
print(X_train.shape)
X_val.shape
###Output
(39054, 1)
###Markdown
Establish Baseline
###Code
baseline_guess = y_train.mean()
MAE = abs(y_train - baseline_guess).mean()
print(f'''If my baseline model always predicts {baseline_guess}, on average, the prediction will be off by {MAE}''')
###Output
If my baseline model always predicts 3582.706918625493, on average, the prediction will be off by 1204.6730406617414
###Markdown
Build the Model
###Code
#importing the linear regression function
from sklearn.linear_model import LinearRegression
#instantiating the predictor
lr = LinearRegression()
#training the predictor
lr.fit(X_train, y_train);
#linear regression outputs
print(lr.coef_[0])
lr.intercept_
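# A sketch comparing this model to the baseline above: the fitted line's mean
# absolute error on the training data (it should come in under the baseline MAE).
train_mae = abs(y_train - lr.predict(X_train)).mean()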
###Output
855.4859868285281
###Markdown
From the output of the linear regression model, the predicted price goes up by $855.49 for every bedroom, with a base price of $2,268.35 for zero bedrooms. Stretch Goal Version
###Code
#defining the target vector
y = df['price']
#defining the feature matrix
X = df[['bathrooms', 'bedrooms']]
#confirming the shape of the feature matrix
X.shape
fig, ax = plt.subplots()
plt.scatter(X['bedrooms'], y)
plt.xlabel('Bedrooms')
plt.ylabel('Price')
fig.set_facecolor('w')
plt.show()
fig, ax = plt.subplots()
plt.scatter(X['bathrooms'], y)
plt.xlabel('Bathrooms')
plt.ylabel('Price')
fig.set_facecolor('w')
plt.show()
#creating the mask for the division of the data
mask = X.index < 39482
#defining the training data
X_train, y_train = X.loc[mask], y.loc[mask]
#defining the validation data
X_val, y_val = X.loc[~mask], y.loc[~mask]
#confirming the correct number of observations
print(X_train.shape)
X_val.shape
X_train.head()
#Establishing the Baseline
baseline_guess = y_train.mean()
MAE = abs(y_train - baseline_guess).mean()
print(f'''If my baseline model always predicts {baseline_guess}, on average, the prediction will be off by {MAE}''')
#instantiating the predictor
lr=LinearRegression()
#training the predictor
lr.fit(X_train, y_train);
#regression outputs
print(lr.coef_)
lr.intercept_
###Output
[2102.56954013 384.34345037]
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv',
parse_dates=['created'],
index_col='created')
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.head()
df.info()
df.isnull().any()
df['price'].describe()
###Output
_____no_output_____
###Markdown
Some EDA
###Code
df['price'].plot(kind='hist')
df.describe()['price']
df['bedrooms'].plot(kind='hist')
# splitting my data. I will use bedrooms to predict rental price
# Target
y = df['price']
# Feature Matrix
X = df[['bedrooms']]
import matplotlib.pyplot as plt
plt.scatter(X, y)
plt.xlabel('Bedrooms')
plt.ylabel('Rent Price')
plt.show()
###Output
_____no_output_____
###Markdown
Train- Validation Split
###Code
df = df.sort_index()  # sort by the datetime index before splitting on the date cutoff
cutoff = '2016-06-10'
mask = X.index < cutoff
X_train, y_train = X.loc[mask], y.loc[mask]
x_val, y_val = X.loc[~mask], y.loc[~mask]
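# Sanity check (a sketch): every training row predates the cutoff date, and
# the validation rows do not.
assert (X_train.index < cutoff).all() and (x_val.index >= cutoff).all()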
###Output
_____no_output_____
###Markdown
Establishing Baselines
###Code
## Establishing Baseline
y_train
y_train.plot(kind='hist')
baseline_guess = y_train.mean()
MAE = abs(y_train - baseline_guess).mean()
print(baseline_guess)
print(MAE)
###Output
3578.6241538914305
1200.6979788392466
###Markdown
Building a Model
###Code
from sklearn.linear_model import LinearRegression
# defining the predictor
lr = LinearRegression()
# training predictor
lr.fit(X_train, y_train);
lr.coef_[0]
lr.intercept_
###Output
_____no_output_____
###Markdown
Listed here is the model coefficient (860.15). This value is the slope of the line of best fit. The intercept is where the line of best fit crosses the Y-axis. Defining the formula for the line of best fit (not necessary, according to Nicholas)
###Code
## rent_price_prediction = 860.15 * bedrooms + 2258.03
###Output
_____no_output_____
###Markdown
Stretch GOALS (Linear Regression with 2+ features)
###Code
df.describe()
# splitting my data. I will use bedrooms, bathrooms, and whether it was prewar to predict price
# Target
y = df['price']
# Feature Matrix
X = df[['bedrooms', 'bathrooms', 'pre-war']]
## train - validation split
cutoff = '2016-06-10'
mask = X.index < cutoff
X_train, y_train = X.loc[mask], y.loc[mask]
x_val, y_val = X.loc[~mask], y.loc[~mask]
#establishing baselines
y_train
y_train.plot(kind='hist')
baseline_guess = y_train.mean()
MAE = abs(y_train - baseline_guess).mean()
print(baseline_guess)
print(MAE)
# defining the predictor
regressor = LinearRegression()
# training predictor
regressor.fit(X_train, y_train);
regressor.coef_
regressor.intercept_
coeff_df = pd.DataFrame(regressor.coef_, X.columns, columns=['Coefficient'])
coeff_df
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [x] Look at the data. Choose a feature, and plot its relationship with the target.- [x] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [x] Define a function to make new predictions and explain the model coefficient.- [x] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [x] Do linear regression with two or more features.- [x] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning? Assignment Model using Bedroom Number Wrangle Data
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '/Users/keila/Documents/Lambda/Units_Git/DS-Unit-2-Linear-Models/data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
# Take a look at what some of the data looks like
df.head()
# Let's see if there are any missing values and the dtype of each feature
df.info()
# There are a few null values. I want to drop those, set the index to creation time of entry,
# and only keep the features I will be using.
def wrangle(df):
df.dropna(inplace = True)
df.set_index(['created'], inplace = True)
df = df[['price','bathrooms', 'bedrooms', 'dishwasher', 'longitude', 'latitude', 'laundry_in_unit']]
return df
# Apply the function to df
df = wrangle(df)
# Let's take a look at what df looks like now
df.head()
# Want to double check shape of df
print(df.shape)
###Output
_____no_output_____
###Markdown
Split Data
###Code
# Arrange X features matrix & y target vector
X = df[['bedrooms']]
y = df['price']
###Output
_____no_output_____
###Markdown
Establish Baseline
###Code
# Import the needed estimator class from Scikit-Learn
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
# Set up my baseline
print('Mean rent price:', y.mean())
y_pred = [y.mean()] * len(y)
print('Baseline MAE:', mean_absolute_error(y, y_pred))
###Output
Mean rent price: 3578.0165040942848
Baseline MAE: 1203.1825021400039
###Markdown
Build Model
###Code
# Instantiate class
model = LinearRegression()
# Fit model
model.fit(X, y)
# import matplotlib.pylot
import matplotlib.pyplot as plt
# visualize model
plt.scatter(df['bedrooms'], df['price'])
plt.plot(df['bedrooms'], y_pred, label = 'Baseline Model', color = 'brown')
plt.plot(X, model.predict(X), label = 'Linear Model', color = 'green')
plt.xlabel('Bedrooms')
plt.ylabel('Rent Price')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Check Metrics
###Code
# look at MAE using the model
print('Training MAE:', mean_absolute_error(y, model.predict(X)))
###Output
Training MAE: 973.2377536991468
###Markdown
Communicate Results
###Code
# write a function to communicate results of model
def predictions(bedroom_num):
y_pred = model.predict([[bedroom_num]])
estimate = y_pred[0]
coefficient = model.coef_[0]
result = f'When predicting the rent for a {bedroom_num} bedroom apartment our model estimates the rent will be ${estimate:,.2f}.'
explanation = f'Each additional bedroom adds ${coefficient:,.2f} to the rent.'
return result + '\n' + explanation
# test out function
print(predictions(10))
# test out function again
print(predictions(2))
###Output
When predicting the rent for a 2 bedroom apartment our model estimates the rent will be $3,967.81.
Each additional bedroom adds $854.86 to the rent.
###Markdown
Model using Appliances in Unit Wrangle Data
###Code
# create new feature to see if appliance number can be used to estimate rent
df['appliances_in_unit'] = df['dishwasher'] + df['laundry_in_unit']
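# Quick check (a sketch): the engineered feature can only take the values 0, 1,
# or 2, since it is the sum of two boolean indicator columns.
appliance_counts = df['appliances_in_unit'].value_counts()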
#take a look at new feature I made
df.head()
###Output
_____no_output_____
###Markdown
Split Data
###Code
# Arrange X features matrix & y target vector for second model
X2 = df[['appliances_in_unit']]
y2 = df['price']
###Output
_____no_output_____
###Markdown
Establish Baseline
###Code
# set the baseline up again (same as one above, just nice to have it here for quick reference)
print('Mean Rent Price:', y2.mean())
y2_pred = [y2.mean()]*len(y2)
print('Baseline MAE:', mean_absolute_error(y2, y2_pred))
###Output
Mean Rent Price: 3578.0165040942848
Baseline MAE: 1203.1825021400039
###Markdown
Build Model
###Code
# Instantiate class
model2 = LinearRegression()
# Fit new model
model2.fit(X2, y2)
# visualize the new model
plt.scatter(df['appliances_in_unit'], df['price'])
plt.plot(df['appliances_in_unit'], y2_pred, label = 'Baseline Model', color = 'brown')
plt.plot(X2, model2.predict(X2), label = 'Linear Model', color = 'green')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Check Metrics
###Code
# look at MAE using the new model
print('Training MAE:', mean_absolute_error(y2, model2.predict(X2)))  # use model2, the model fitted on this feature
###Output
Training MAE: 1166.4544241193316
###Markdown
Communicate Results
###Code
# write a function that estimates rent with appliance number
def predictions2(appliance_num):
y_pred = model2.predict([[appliance_num]])
estimate = y_pred[0]
    coefficient = model2.coef_[0]  # use model2's coefficient, not the bedrooms model's
result = f'When predicting the rent for an apartment with {appliance_num} appliances, our model estimates the rent will be ${estimate:,.2f}.'
explanation = f'Each additional appliance adds ${coefficient:,.2f} to the rent.'
return result + '\n' + explanation
# test out new function
print(predictions2(1))
###Output
When predicting the rent for an apartment with 1 appliances, our model estimates the rent will be $3,851.10.
Each additional appliance adds $854.86 to the rent.
###Markdown
Stretch Split Data
###Code
# Arrange X features matrix & y target vector for linear regression with 2 features
X3 = df[['longitude', 'latitude']]
y3 = df['price']
###Output
_____no_output_____
###Markdown
Set Baseline
###Code
# Again, the baseline is same as above, writing it up again for reference
print('Mean rent price:', y3.mean())
y3_pred = [y3.mean()] * len(y3)
print('Baseline MAE:', mean_absolute_error(y3, y3_pred))
###Output
Mean rent price: 3579.5609816051456
Baseline MAE: 1201.5251847945751
###Markdown
Build Model
###Code
# instantiate class
model3 = LinearRegression()
# fit model
model3.fit(X3, y3)
###Output
_____no_output_____
###Markdown
Check Metrics
###Code
# look at MAE using 2 feature model
print('Training MAE:', mean_absolute_error(y3, model3.predict(X3)))
###Output
Training MAE: 1146.5318150026005
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import matplotlib.pyplot as plt
import sys
from sklearn.metrics import mean_absolute_error
from sklearn.linear_model import LinearRegression
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
#Here we view the first 5 rows of the dataframe.
df.head()
# Here we get the data types for all of the columns in the dataframe.
df.info()
df.describe()
df['pre-war'].plot(kind='hist')
plt.xlabel('pre war');
df['price'].plot(kind='hist')
plt.xlabel('price');
plt.scatter(df['pre-war'], df['price'])
plt.xlabel('pre war')
plt.ylabel('price');
target = 'price'
y = df[target]
x = df[['pre-war']]
y_pred = [y.mean()] * len(y)
print('Baseline MAE:', mean_absolute_error(y, y_pred))
y.shape
x.shape
model = LinearRegression()
model.fit(x,y)
y_pred = model.predict(x)
print('Training MAE:', mean_absolute_error(y, y_pred))
plt.scatter(x,y)
plt.plot(x, y_pred, color='red', label="Linear Model")
plt.xlabel("Pre War Construction")
plt.ylabel("Price")
plt.legend()
plt.show();
print(f'price = {model.intercept_} + {model.coef_[0]} x pre-war')
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
###Code
import sys
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
pd.set_option('display.max_columns', 100)
# Remove outliers:
# the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= 1375) & (df['price'] <= 15500) &
(df['latitude'] >=40.57) & (df['latitude'] < 40.99) &
(df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)]
df.describe()
## (When I first plotted below, there was one distracting 10-bath outlier.
## I remove that here.)
df = df[df['bathrooms'] < 10]
## Most of the columns are boolean (shown by max and min being 1 and 0).
## Let's plot price against number of bathrooms.
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.scatter(df['bathrooms'], df['price'], alpha=.15)
ax.set_title('Monthly Rent vs # Bathrooms', fontsize=14, y=1.04)
ax.set_xlabel('# of bathrooms')
ax.set_ylabel('Monthly rent ($)')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.show()
## There's clearly a relationship here. Let's try making a model from
## only bathroom count.
X = df[['bathrooms']]
y = df['price']
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
model = LinearRegression()
model.fit(X=X, y=y)
y_pred = model.predict(X)
y_pred
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.scatter(df['bathrooms'], df['price'], alpha=.15)
model1 = ax.plot(X, y_pred, color='black', alpha=.6)
ax.set_title('Monthly Rent vs # Bathrooms', fontsize=14, y=1.04)
ax.set_xlabel('# of bathrooms')
ax.set_ylabel('Monthly rent ($)')
ax.legend(model1, ['Predictions'])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.show()
mean_absolute_error(y, y_pred)
## Ok. Let's see if, by adding bedroom count to our model,
## we can reduce our mean absolute error.
X = df[['bathrooms', 'bedrooms']]
model = LinearRegression()
model.fit(X=X, y=y)
y_pred = model.predict(X)
mean_absolute_error(y, y_pred)
## Not bad! I don't think this will be helpful, but let's see
## if these boolean features can do anything.
X = df[['bathrooms', 'bedrooms', 'cats_allowed']]
model = LinearRegression()
model.fit(X=X, y=y)
y_pred = model.predict(X)
mean_absolute_error(y, y_pred)
## Tiny improvement. Just for fun, let's iterate through
## the boolean features and see which one of them does the
## most good for our model.
bool_feats = df.loc[:,'elevator':].columns
MAEs = []
for feat in bool_feats:
X = df[['bathrooms', 'bedrooms', feat]]
model = LinearRegression()
model.fit(X=X, y=y)
y_pred = model.predict(X)
MAEs.append(mean_absolute_error(y, y_pred))
df_bool_errors = pd.DataFrame({'feat':bool_feats, 'MAE':MAEs})
df_bool_errors.sort_values(by='MAE').head()
## There you have it! The doorman's presence is the most important boolean
## predictor of rental price.
## What if we added those first three boolean features? What would our
## error be then?
X = df[['bathrooms', 'bedrooms', 'doorman', 'elevator', 'fitness_center']]
model = LinearRegression()
model.fit(X=X, y=y)
y_pred = model.predict(X)
mean_absolute_error(y, y_pred)
## Hardly better than with only the doorman. Let's revert.
X = df[['bathrooms', 'bedrooms', 'doorman']]
model = LinearRegression()
model.fit(X=X, y=y)
y_pred = model.predict(X)
## And check out the coefficients.
print('coefficients:\t', model.coef_)
print('intercept:\t', model.intercept_)
## Let's explain these coefficients. The first one means that for
## every added bathroom, the monthly price prediction goes up by
## $1922. For every added bedroom, it goes up by $441. The presence
## of a doorman singlehandedly raises the monthly price prediction
## by $745!
## Now we can make our own function using these coefficients.
def rent_predict(bathrooms, bedrooms, doorman):
price = 276.26 + 1922.18*bathrooms + 440.52*bedrooms + 744.87*doorman
return price
## When we run mean_absolute_error() on our y_pred from the sklearn
## model compared to predictions from this homemade function, the
## result should be just about zero.
homemade_pred = rent_predict(X['bathrooms'], X['bedrooms'], X['doorman'])
mean_absolute_error(y_pred, homemade_pred)
## Nice. How much would a studio apartment with a doorman cost?
rent_predict(1, 0, 1)
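## (276.26 + 1922.18*1 + 440.52*0 + 744.87*1 = $2943.31 a month
## for a one-bath studio with a doorman.)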
## Yikes.
###Output
_____no_output_____ |
week10_ML_competition_pca_kmeans/day3_unsupervised_PCA_kmeans/competition_diamond/h2o_diamond.ipynb | ###Markdown
H2O - https://www.cienciadedatos.net/documentos/py04_machine_learning_con_h2o_y_python
###Code
!pip3 install h2o
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
import h2o
from h2o.automl import H2OAutoML
train = pd.read_csv('diamonds_train.csv')
test = pd.read_csv('diamonds_test.csv')
# JAVA version < 14 must be installed
# https://www.oracle.com/java/technologies/javase-jdk11-downloads.html
h2o.init()
h2train = h2o.H2OFrame(train)
h2test = h2o.H2OFrame(test)
columnas = [a for a in h2train.columns if a != "price"][1:]
x = columnas
y = "price"
print(columnas)
automl = H2OAutoML(max_models=50, seed=42, max_runtime_secs=300, sort_metric='RMSE')
automl.train(x=x, y=y, training_frame=h2train)
# XGBoost is not available on Windows :'(
print('[INFO] Models leader board:')
leader_board = automl.leaderboard
leader_board.head()
predictions = automl.leader.predict(h2test)
test.rename(columns={"Unnamed: 0" : "id"}, inplace = True)
test["price"] = predictions.as_data_frame()
columnasentrega = ["id","price"]
entrega = test["price"]
test = test[columnasentrega]
test.head()
test['price'] = test['price'].astype("int")
test.to_csv('to_submit_int.csv', index=False)
testing = pd.read_csv('to_submit_int.csv')
testing
h2o.cluster().shutdown()
###Output
H2O session _sid_9a68 was not closed properly.
###Markdown
**Evaluation with the real data**
###Code
true = pd.read_csv('evaluation_price.csv')
y_pred = testing['price']
y_true = true['price']
np.sqrt(mean_squared_error(y_true, y_pred))
###Output
_____no_output_____ |
Climate_Population_ConflictData.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
test = pd.read_stata('/content/BS_replication_data.dta')
test
test.shape
test['year'].nunique()
test.columns
Fin = test.drop(['grid_id', 'extreme', 'extreme_p', 'edate', 'fh', 'sh', 'above_equator', 'extreme_1', 'migration_70_00_w', 'temp_ap_95', 'extreme_p5', 'extreme_2', 'extreme_p3', 'prec', 'prec_ap_5', 'drought', 'spei', 'extreme_spei5', 'incidence', 'incidence_number', 'rural', 'agropastoral', 'onset_incidence', 'battle_terr', 'battle_noterr', 'riot', 'csi_mean', 'popgrowth_ipo', 'popdens_ipo', 'lnpopdens_ipo', 'popgrowth_25', 'popgrowth_50', 'popgrowth_75', 'popdens_25', 'popdens_50', 'popdens_75', 'pop_int', 'lnpopdens', 'popgrowth_int', 'popgrowth15', 'popgrowth00', 'migration_90_00', 'migration_70_80', 'migration_80_90', 'migration_70_00', 'migration_90_00_w', 'migration_80_90_w', 'migration_70_80_w'], axis=1)
Fin.head()
Fin.tail()
Fin['country_id'].nunique()
Fin['point_x'].nunique()
Fin['year'].values
# Keep only the years 1990-2013 (drop 1979-1989, 2014 and 2015)
drop_years = [1979.0, 1980.0, 1981.0, 1982.0, 1983.0, 1984.0, 1985.0,
              1986.0, 1987.0, 1988.0, 1989.0, 2014.0, 2015.0]
Fin = Fin[~Fin.year.isin(drop_years)]
Fin
Fin.to_csv(r'C:\Users\surfb\OneDrive\Desktop\Fin.csv', header=True)
Fin_C1 = Fin.drop(['country_id', 'climate_id', 'temp_sd', 'severity_ld'], axis=1)
Fin_C1
# Keep only July (month 7.0); drop every other month
drop_months = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 9.0, 10.0, 11.0, 12.0]
Fin_C1 = Fin_C1[~Fin_C1.month.isin(drop_months)]
Fin_C1
Fin_C1.to_csv(r'C:\Users\surfb\OneDrive\Desktop\Fin_C1.csv', header=True)
print (Fin_C1)
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____ |
Use-case_Fraud_detection/BEST_fraud_detection/QuantumCreditFraud-Qiskit-Working.ipynb | ###Markdown
Credit Card Fraud - A Growing IssueCredit card fraud is a growing issue, with \\$28.5 billion lost globally to credit card fraud in 2020, a figure expected to exceed \\$49 billion by 2030 [1]. In 2020, around 1 out of 4 digital interactions were credit card fraud attempts (Arkose Labs). Since there are so many more non-fraudulent transactions than fraudulent ones, it is challenging to detect the fraudulent transactions. In this notebook, we will use a quantum autoencoder to perform anomaly detection: we encode the 4-qubit state into a 3-qubit state and then use a decoder to decode the 3-qubit state back into a 4-qubit state. The quantum autoencoder is trained only on the normal data (in this case, the non-fraudulent transactions), so it learns to reconstruct normal transactions with high fidelity. To tell if a datapoint is an anomaly, we check whether its reconstruction fidelity falls below a chosen threshold. Import the dataset
###Code
import pandas as pd  # needed for the dataframe operations throughout

df = pd.read_csv('creditcard.csv')
###Output
_____no_output_____
###Markdown
We are only going to print the first 5 rows because the dataset contains over 280,000 rows. Each row represents a transaction. Time shows the time passed between the current and first transactions and amount shows the dollar amount spent on the transaction. There are also 28 more features represented by V1, V2, ... , V28 which come from principal component analysis. Finally, there is the class, where a '0' represents no fraud committed and a '1' represents a fraudulent transaction Let's now check the class distribution
###Code
print('No Frauds: ', df['Class'].value_counts()[0])
print('Frauds: ', df['Class'].value_counts()[1])
###Output
No Frauds: 284315
Frauds: 492
###Markdown
Credit card fraud is relatively rare, this creates a very imbalanced distribution. A very imbalanced distribution is not ideal as this can lead to overfitting and our model assuming no fraud most of the time. It is also challenging to find the true correlations between the features and class.
###Code
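# plot_correlation_matrix is called below but its definition was not
# included in the notebook, so the helper here is a minimal sketch
# (an assumption, not the author's original code): it renders a
# correlation matrix as a seaborn heatmap with the given title.
import matplotlib.pyplot as plt
import seaborn as sns

def plot_correlation_matrix(corr, title):
    fig, ax = plt.subplots(figsize=(12, 9))
    sns.heatmap(corr, cmap='coolwarm_r', ax=ax)
    ax.set_title(title, fontsize=14)
    plt.show()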
plot_correlation_matrix(df.corr(), "Original Correlation Matrix")
###Output
_____no_output_____
###Markdown
As you can see, nothing can really be inferred from this correlation matrix since the data is so imbalanced. We are going to create a sub sample dataset with equal amounts of non fraudulent data and fraudulent data. We are also going to scale the 'Time' and 'Amount' values for better processing.
###Code
#from sklearn.preprocessing import StandardScaler, RobustScaler
from sklearn.preprocessing import RobustScaler
# Scaling amount and time for the subsample
df['scaled_amount'] = RobustScaler().fit_transform(df['Amount'].values.reshape(-1,1))
df['scaled_time'] = RobustScaler().fit_transform(df['Time'].values.reshape(-1,1))
df.drop(['Time','Amount'], axis=1, inplace=True) # Drop the original time and amount values
# Add scaled amount and times to the data frame
scaled_amount = df['scaled_amount']
scaled_time = df['scaled_time']
df.drop(['scaled_amount', 'scaled_time'], axis=1, inplace=True)
df.insert(0, 'scaled_amount', scaled_amount)
df.insert(1, 'scaled_time', scaled_time)
# Create the balanced subsample: 492 fraud and 492 non-fraud transactions
df = df.sample(frac=1)
fraud_df = df.loc[df['Class'] == 1]
non_fraud_df = df.loc[df['Class'] == 0][:492]
normal_distributed_df = pd.concat([fraud_df, non_fraud_df])
sub_sample_df = normal_distributed_df.sample(frac=1, random_state=42)
# Display the first 5 rows to see the result
sub_sample_df.head()
###Output
_____no_output_____
###Markdown
We can now plot the correlation matrix of our new sub sample to get a better idea of the true correlations between features and 'Class'
###Code
sub_sample_corr = sub_sample_df.corr()
plot_correlation_matrix(sub_sample_corr, "Sub Sample Correlation Matrix")
###Output
_____no_output_____
###Markdown
The correlations are now much more noticeable. Now, we can find the features with the strongest correlation to class. Half are the strongest positive correlations, half are the strongest negative correlations.
###Code
def find_strongest_correlations(dataframe, latent_qubits):
num_features = latent_qubits**2
class_correlations = dataframe.loc['Class', :]
class_correlations = class_correlations.drop(index = 'Class')
feature_list = list(class_correlations.index)
correlation_list = [class_correlations[x] for x in feature_list]
features = []
correlations = []
for i in range(int(num_features/2)):
correlations.append(max(correlation_list))
features.append(feature_list[correlation_list.index(max(correlation_list))])
del feature_list[correlation_list.index(max(correlation_list))]
del correlation_list[correlation_list.index(max(correlation_list))]
correlations.append(min(correlation_list))
features.append(feature_list[correlation_list.index(min(correlation_list))])
del feature_list[correlation_list.index(min(correlation_list))]
del correlation_list[correlation_list.index(min(correlation_list))]
return features, correlations
feature_list, correlations = find_strongest_correlations(sub_sample_corr, 4)
print(find_strongest_correlations(sub_sample_corr, 4))
###Output
(['V4', 'V14', 'V11', 'V12', 'V2', 'V10', 'V19', 'V16', 'V20', 'V3', 'V21', 'V17', 'V28', 'V9', 'V27', 'V7'], [0.7049280336690459, -0.7491777234950696, 0.6839459727426278, -0.677290478542406, 0.48594631996592036, -0.6228064388962743, 0.24025481223101428, -0.5901179580594654, 0.16528809523020951, -0.561764117252581, 0.12903443052145347, -0.5605289480943042, 0.08345266375453653, -0.5538916275260506, 0.08181031705095593, -0.4772773115289583])
###Markdown
We now have 16 features that are the most correlated with 'Class'. In this use case, we will be using 4 qubits to represent the data, which means we need $2^4 = 16$ features to encode into the 4 qubits through amplitude encoding. Later, we will use the quantum autoencoder to encode the 4 qubits into 3 qubits and then use a decoder to decode those 3 qubits back to 4 qubits.
###Code
# Dataframe of all non fraudulent transactions
branch = df
non_fraud = branch[branch["Class"] != 1]
# All examples of non fraudulent data with 16 features
non_fraud = non_fraud[feature_list]
non_fraud.head()
input_data = non_fraud.to_numpy()
###Output
_____no_output_____
###Markdown
Training
###Code
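# Imports assumed by the cells below: pennylane, torch, qiskit and
# matplotlib are standard dependencies here, while the autoencoder
# helpers (QubitsArrangement, setAB_amplitude, setAux, setEnt,
# e2_classic, swap_t) appear to come from this project's own utility
# module, whose name is not shown in the source; the commented import
# below is a hypothetical placeholder, not a real module.
import random
import torch
import numpy as np
import matplotlib.pyplot as plt
import pennylane as qml
from pennylane.optimize import AdamOptimizer
from qiskit import IBMQ
# from qae_helpers import (QubitsArrangement, setAB_amplitude, setAux,
#                          setEnt, e2_classic, swap_t)  # hypothetical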
shots = 1000 #The amount of shots used for each epoch of training
nr_trash= 1 # Number of qubits 'thrown away'
nr_latent= 3 # Number of qubits left after the encoder is used
nr_ent = 0
epochs = 500 # Number of iterations of training to perform to find the final encoder parameters
learning_rate = .005 # Learning rate for the optimizer, dictates how fast the optimizer trains
batch_size = 2
num_samples = 250 # Number of training samples used for each epoch
beta1 = 0.9
beta2 = 0.999
opt = AdamOptimizer(learning_rate, beta1=beta1, beta2=beta2)
# Organizes and specifies our qubits for the device
spec = QubitsArrangement(nr_trash, nr_latent, nr_swap=1, nr_ent=nr_ent)
IBMQ.load_account()
provider=IBMQ.get_provider(hub='ibm-q-community', group='qhack-hackathon', project='16-qubit')
dev_qiskit = qml.device('qiskit.ibmq', wires=spec.num_qubits, backend='ibmq_guadalupe',provider=provider)
@qml.qnode(dev_qiskit)
def training_circuit_example(init_params, encoder_params, reinit_state):
#initilaization
setAB_amplitude(spec, init_params)
setAux(spec, reinit_state)
setEnt(spec, inputs=[1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)])
#encoder
for params in encoder_params:
e2_classic(params, [*spec.latent_qubits, *spec.trash_qubits])
#swap test
swap_t(spec)
return [qml.probs(i) for i in spec.swap_qubits]
def fid_func(output):
# Implemented as the Fidelity Loss
# output[0] because we take the probability that the state after the
# SWAP test is ket(0), like the reference state
fidelity_loss = 1 / output[0]
return fidelity_loss
# Define cost function
def cost(encoder_params, X):
reinit_state = [0 for i in range(2 ** len(spec.aux_qubits))]
reinit_state[0] = 1.0
loss = 0.0
for x in X:
output = training_circuit_example(init_params=x[0], encoder_params=encoder_params, reinit_state=reinit_state)[0]
f = fid_func(output)
loss = loss + f
return loss / len(X)
# Define fidelity function
def fidelity(encoder_params, X):
reinit_state = [0 for i in range(2 ** len(spec.aux_qubits))]
reinit_state[0] = 1.0
loss = 0.0
for x in X:
output = training_circuit_example(init_params=x[0], encoder_params=encoder_params, reinit_state=reinit_state)[0]
f = output[0]
loss = loss + f
return loss / len(X)
def iterate_batches(X, batch_size):
random.shuffle(X)
batch_list = []
batch = []
for x in X:
if len(batch) < batch_size:
batch.append(x)
else:
batch_list.append(batch)
batch = []
if len(batch) != 0:
batch_list.append(batch)
return batch_list
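# The notebook loads pre-trained encoder parameters further below rather
# than training here. For reference, a hypothetical training loop wiring
# together the pieces above might look like the following (commented out;
# the parameter shape is inferred from the pre-trained tensor below, and
# the data preparation is an assumption):
# encoder_params = torch.rand(1, 90, requires_grad=True)
# training_data = [torch.tensor([row]) for row in input_data[:num_samples]]
# for epoch in range(epochs):
#     for batch in iterate_batches(training_data, batch_size):
#         encoder_params = opt.step(lambda p: cost(p, batch), encoder_params)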
# Create a tensor dataset with only fraud data and most correlated features
# for finding the fidelity of the quantum autoencoder on fraud transactions
fraud = fraud_df[feature_list]
np_fraud = fraud.to_numpy()
fraud_data = [ torch.tensor([np_fraud[i]]) for i in range(len(fraud.to_numpy()))]
fraud.head()
encoder_params = torch.tensor([[-0.45793534, -0.60180382, 1.36706854, -0.39273726, 0.78967496,
-0.11834691, 1.21601293, 0.7257659 , 0.16775198, 0.87110514,
0.5825973 , 1.35786832, 1.6198694 , -0.21858262, 1.41542989,
1.2968311 , 1.8630585 , 0.50511886, 0.75524677, 0.82482716,
1.02949018, -0.023521 , 0.55110408, 0.15877528, 0.62316124,
0.37113699, 0.4557925 , 0.62940097, 0.61549768, 0.95122916,
0.22349399, 0.86457997, 0.81546047, 1.47984623, 1.72818011,
-0.30175269, 0.67999145, -0.22226086, 0.94370564, 1.48028116,
0.72720142, 0.20210445, 0.14995309, 0.19133051, -0.35101019,
0.40932117, -0.09846242, 0.65960454, 0.78151562, 1.17058629,
0.23858532, 0.71485483, 0.31327769, 1.63693523, 0.95525645,
0.58935465, 0.76165831, 0.62729872, 0.55561916, 0.19378356,
0.41408805, 1.01374824, 0.37282255, -0.06769513, 0.45583351,
-0.05101048, 0.83344398, 1.58156091, 1.46059524, 0.9371276 ,
0.96522386, 0.27626285, 0.19818911, 0.11227637, 0.38220371,
0.64166103, 0.92703234, 0.3736458 , 0.21161801, 0.62412085,
0.3278856 , -0.18893975, 0.86769553, 0.78573112, 0.50142613,
0.96622037, 0.40300401, 0.55802604, 0.12912973, 0.14822851]], requires_grad=True)
branch = df
non_fraud_df = branch.loc[branch["Class"]!=1][:250]
non_fraud = non_fraud_df[feature_list]
np_non_fraud = non_fraud.to_numpy()
non_fraud_data = [ torch.tensor([np_non_fraud[i]]) for i in range(len(non_fraud.to_numpy()))]
non_fraud_flist=[]
for b in non_fraud_data:
f=fidelity(encoder_params, [b])
non_fraud_flist.append(f.item())
print(min(non_fraud_flist))
print(max(non_fraud_flist))
branch = df
fraud_df = branch.loc[branch["Class"]!=0][:250]
fraud = fraud_df[feature_list]
np_fraud = fraud.to_numpy()
fraud_data = [ torch.tensor([np_fraud[i]]) for i in range(len(fraud.to_numpy()))]
fraud_flist=[]
for b in fraud_data:
f=fidelity(encoder_params, [b])
fraud_flist.append(f.item())
print(min(fraud_flist))
print(max(fraud_flist))
plt.hist(non_fraud_flist, bins =100 ,label="non_fraud",color = "red",alpha=0.4)
plt.hist(fraud_flist, bins = 100 ,label="fraud", color = "skyblue",alpha=0.4)
plt.title("Compression fidelity",)
plt.legend()
plt.savefig("Compression_fidelity-Qiskit")
plt.show()
split=0.75
print("split:",split)
b_e=[]
for i in fraud_flist:
if i<split:
b_e.append(1)
else:
b_e.append(0)
ab_ac=sum(b_e)/len(b_e)
print("non fraud classification accuracy:",ab_ac)
m_e=[]
for i in non_fraud_flist:
if i>split:
m_e.append(1)
else:
m_e.append(0)
am_ac=sum(m_e)/len(m_e)
print("fraud accuracy:",am_ac)
t_ac=(sum(b_e)+sum(m_e))/(len(b_e)+len(m_e))
print("total accuracy:",t_ac)
###Output
split: 0.75
fraud classification accuracy: 0.992
non fraud classification accuracy: 0.0
total accuracy: 0.496
|
SagemakerModelling.ipynb | ###Markdown
Ibovespa forecasting using neural networks Machine Learning Engineer Nanodegree - Capstone Proposal Sagemaker Modelling- Upload data to S3- Train Model Import python packages
###Code
import os
import io
import ast
import boto3
import sagemaker
import numpy as np
import pandas as pd
from sagemaker.pytorch import PyTorch
from ibovespa.utils import load_config
from ibov.deploy import get_deploy_config, define_model
###Output
_____no_output_____
###Markdown
Loading Configs
###Code
config = load_config()
dropout = config.get("model").get("dropout")
window = config.get("feature").get("window")
hidden_layer = config.get("model").get("hidden_layer")
lr = config.get("model").get("lr")
seed = config.get("model").get("seed")
epochs = config.get("model").get("epochs")
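# For reference, a hypothetical sketch of the config.json structure,
# inferred from the keys accessed in this notebook (all values shown
# are illustrative assumptions, not the project's real settings):
# {
#   "model":   {"dropout": 0.2, "hidden_layer": 64, "lr": 0.001,
#               "seed": 42, "epochs": 100},
#   "feature": {"window": 10},
#   "sagemaker": {"role": "<IAM role ARN>", "region": "us-east-1",
#                 "bucket_prefx": "ibovespa"}
# }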
###Output
_____no_output_____
###Markdown
Set Sagemaker Session
###Code
role = config["sagemaker"]["role"]
region = config["sagemaker"]["region"]
prefix = config["sagemaker"]["bucket_prefx"]
session = sagemaker.Session(boto_session=boto3.session.Session(region_name=region))
bucket = session.default_bucket()
###Output
_____no_output_____
###Markdown
Upload data to S3
###Code
input_data = session.upload_data(path="data/data.csv", bucket=bucket, key_prefix=prefix)
input_config = session.upload_data(path="config.json", bucket=bucket, key_prefix=prefix)
input_scaler = session.upload_data(path="data/scaler.json", bucket=bucket, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Train Model
###Code
estimator = PyTorch(entry_point="train.py",
source_dir="ibovespa",
py_version="py3",
role=role,
framework_version='0.4.0',
instance_count=1,
instance_type='ml.p2.xlarge')
estimator.fit({'train': input_data, "config": input_config, "scaler": input_scaler})
###Output
2021-01-30 02:37:45 Starting - Starting the training job...
2021-01-30 02:38:08 Starting - Launching requested ML instancesProfilerReport-1611974262: InProgress
......
2021-01-30 02:39:15 Starting - Preparing the instances for training.........
2021-01-30 02:41:04 Downloading - Downloading input data......
2021-01-30 02:42:13 Training - Training image download completed. Training in progress.[34mbash: cannot set terminal process group (-1): Inappropriate ioctl for device[0m
[34mbash: no job control in this shell[0m
[34m2021-01-30 02:42:16,278 sagemaker-containers INFO Imported framework sagemaker_pytorch_container.training[0m
[34m2021-01-30 02:42:16,304 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.[0m
[34m2021-01-30 02:42:16,919 sagemaker_pytorch_container.training INFO Invoking user training script.[0m
[34m2021-01-30 02:42:17,311 sagemaker-containers INFO Module train does not provide a setup.py. [0m
[34mGenerating setup.py[0m
[34m2021-01-30 02:42:17,312 sagemaker-containers INFO Generating setup.cfg[0m
[34m2021-01-30 02:42:17,312 sagemaker-containers INFO Generating MANIFEST.in[0m
[34m2021-01-30 02:42:17,312 sagemaker-containers INFO Installing module with the following command:[0m
[34m/usr/bin/python -m pip install -U . -r requirements.txt[0m
[34mProcessing /opt/ml/code[0m
[34mCollecting pandas (from -r requirements.txt (line 1))[0m
[34m Downloading https://files.pythonhosted.org/packages/74/24/0cdbf8907e1e3bc5a8da03345c23cbed7044330bb8f73bb12e711a640a00/pandas-0.24.2-cp35-cp35m-manylinux1_x86_64.whl (10.0MB)[0m
[34mRequirement already satisfied, skipping upgrade: numpy>=1.12.0 in /usr/local/lib/python3.5/dist-packages (from pandas->-r requirements.txt (line 1)) (1.15.4)[0m
[34mCollecting pytz>=2011k (from pandas->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/89/06/2c2d3034b4d6bf22f2a4ae546d16925898658a33b4400cfb7e2c1e2871a3/pytz-2020.5-py2.py3-none-any.whl (510kB)[0m
[34mRequirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /usr/local/lib/python3.5/dist-packages (from pandas->-r requirements.txt (line 1)) (2.7.5)[0m
[34mRequirement already satisfied, skipping upgrade: six>=1.5 in /usr/local/lib/python3.5/dist-packages (from python-dateutil>=2.5.0->pandas->-r requirements.txt (line 1)) (1.11.0)[0m
[34mBuilding wheels for collected packages: train
Running setup.py bdist_wheel for train: started[0m
[34m Running setup.py bdist_wheel for train: finished with status 'done'
Stored in directory: /tmp/pip-ephem-wheel-cache-_y9cp6gf/wheels/35/24/16/37574d11bf9bde50616c67372a334f94fa8356bc7164af8ca3[0m
[34mSuccessfully built train[0m
[34mInstalling collected packages: pytz, pandas, train[0m
[34mSuccessfully installed pandas-0.24.2 pytz-2020.5 train-1.0.0[0m
[34mYou are using pip version 18.1, however version 20.3.4 is available.[0m
[34mYou should consider upgrading via the 'pip install --upgrade pip' command.[0m
[34m2021-01-30 02:42:23,253 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"channel_input_dirs": {
"scaler": "/opt/ml/input/data/scaler",
"train": "/opt/ml/input/data/train",
"config": "/opt/ml/input/data/config"
},
"framework_module": "sagemaker_pytorch_container.training:main",
"module_dir": "s3://sagemaker-us-east-1-977053370764/sagemaker-pytorch-2021-01-30-02-37-41-130/source/sourcedir.tar.gz",
"hosts": [
"algo-1"
],
"log_level": 20,
"job_name": "sagemaker-pytorch-2021-01-30-02-37-41-130",
"hyperparameters": {},
"input_data_config": {
"config": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
},
"train": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
},
"scaler": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"additional_framework_parameters": {},
"user_entry_point": "train.py",
"output_data_dir": "/opt/ml/output/data",
"num_gpus": 1,
"network_interface_name": "eth0",
"input_dir": "/opt/ml/input",
"current_host": "algo-1",
"output_dir": "/opt/ml/output",
"model_dir": "/opt/ml/model",
"num_cpus": 4,
"input_config_dir": "/opt/ml/input/config",
"resource_config": {
"hosts": [
"algo-1"
],
"current_host": "algo-1",
"network_interface_name": "eth0"
},
"output_intermediate_dir": "/opt/ml/output/intermediate",
"module_name": "train"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_INPUT_DATA_CONFIG={"config":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"},"scaler":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"},"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[34mSM_HPS={}[0m
[34mSM_FRAMEWORK_PARAMS={}[0m
[34mSM_CHANNEL_CONFIG=/opt/ml/input/data/config[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_HOSTS=["algo-1"][0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_CHANNEL_SCALER=/opt/ml/input/data/scaler[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"config":"/opt/ml/input/data/config","scaler":"/opt/ml/input/data/scaler","train":"/opt/ml/input/data/train"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{},"input_config_dir":"/opt/ml/input/config","input_data_config":{"config":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"},"scaler":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"},"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","job_name":"sagemaker-pytorch-2021-01-30-02-37-41-130","log_level":20,"model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-977053370764/sagemaker-pytorch-2021-01-30-02-37-41-130/source/sourcedir.tar.gz","module_name":"train","network_interface_name":"eth0","num_cpus":4,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train.py"}[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main[0m
[34mSM_NUM_CPUS=4[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}[0m
[34mSM_CHANNELS=["config","scaler","train"][0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-1-977053370764/sagemaker-pytorch-2021-01-30-02-37-41-130/source/sourcedir.tar.gz[0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_USER_ENTRY_POINT=train.py[0m
[34mSM_NUM_GPUS=1[0m
[34mSM_MODULE_NAME=train[0m
[34mPYTHONPATH=/usr/local/bin:/usr/lib/python35.zip:/usr/lib/python3.5:/usr/lib/python3.5/plat-x86_64-linux-gnu:/usr/lib/python3.5/lib-dynload:/usr/local/lib/python3.5/dist-packages:/usr/lib/python3/dist-packages[0m
[34mSM_CHANNEL_TRAIN=/opt/ml/input/data/train[0m
[34mSM_USER_ARGS=[]
[0m
[34mInvoking script with the following command:
[0m
[34m/usr/bin/python -m train
[0m
[34m02:42:25, epoch: 0, train: 4.076, valid: 0.545[0m
[34m02:42:26, epoch: 1, train: 2.282, valid: 0.217[0m
[34m02:42:27, epoch: 2, train: 0.992, valid: 0.089[0m
[34m02:42:28, epoch: 3, train: 0.95, valid: 0.093[0m
[34m02:42:30, epoch: 4, train: 0.738, valid: 0.055[0m
[34m02:42:32, epoch: 5, train: 0.696, valid: 0.082[0m
[34m02:42:34, epoch: 6, train: 0.723, valid: 0.062[0m
[34m02:42:36, epoch: 7, train: 0.637, valid: 0.046[0m
|
07_eclipse.ipynb | ###Markdown
Eclipse stuff> In progress. Cailin has created groundtracks at this path:
###Code
from fastcore.utils import Path
groundtracks = Path("/luna2/cailin/data/eclipse")
import pprint
import pandas as pd  # needed for pd.read_csv below
fnames = list(groundtracks.glob("*_tlontlat_track.csv"))
pd.read_csv(fnames[0])
###Output
_____no_output_____ |
prep_data/tabular_data/02_feature_selection_tabular_data.ipynb | ###Markdown
Feature Selection for Tabular DataThe purpose of this notebook is to demonstrate how to select important features and prune unimportant ones prior to training our machine learning model. This is an important step that yields better prediction performance. PrerequisiteThis notebook is a sequel to the [01_preprocessing_tabular_data.ipynb](01_preprocessing_tabular_data.ipynb) notebook. Before running this notebook, run [01_preprocessing_tabular_data.ipynb](01_preprocessing_tabular_data.ipynb) to preprocess the data used in this notebook. NotesIn this notebook, we use the sklearn framework for data partitioning and `storemagic` to share dataframes with [03_training_model_on_tabular_data.ipynb](03_training_model_on_tabular_data.ipynb). While we load data into memory here, we note that it is possible to skip this and load your partitioned data directly to an S3 bucket. Tabular Data Sets* [boston house data](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)* [california house data](https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html)* [diabetes data](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html) Library Dependencies:* sagemaker >= 2.0.0* numpy* pandas* plotly* sklearn* matplotlib* seaborn* xgboost Setting up the notebook
###Code
import os
import sys
import plotly.express as px
import plotly.offline as pyo
import plotly.graph_objs as go
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
import ast
from matplotlib import pyplot
## sklearn dependencies
from sklearn.datasets import make_regression
import sklearn.model_selection
from sklearn.neighbors import KNeighborsRegressor
from sklearn.inspection import permutation_importance
!{sys.executable} -m pip install -qU 'xgboost'
import xgboost
from xgboost import XGBRegressor
## SageMaker dependencies
import sagemaker
from sagemaker import get_execution_role
from sagemaker.inputs import TrainingInput
from sagemaker.image_uris import retrieve
## This instantiates a SageMaker session that we will be operating in.
session = sagemaker.Session()
## This object represents the IAM role that we are assigned.
role = sagemaker.get_execution_role()
print(role)
###Output
_____no_output_____
###Markdown
Step 1: Load Relevant Variables from preprocessing_tabular_data.ipynb (Required for this notebook)Here we load in our training, test, and validation data sets. We preprocessed this data in the [01_preprocessing_tabular_data.ipynb](01_preprocessing_tabular_data.ipynb) and persisted it using storemagic.
###Code
# Load relevant dataframes and variables from preprocessing_tabular_data.ipynb required for this notebook
%store -r X_train
%store -r X_test
%store -r X_val
###Output
_____no_output_____
###Markdown
Step 2: Computing Feature Importance Scores to Select FeaturesWe show two approaches for computing feature importance scores for each feature. We can rank each feature by its corresponding importance score in an effort to prune unimportant features, which will yield a better performing model. The first approach uses XGBoost and the second uses permutation feature importance. Step 2a: Ranking features by Feature Importance using XGBoostHere we use gradient boosting to extract importance scores for each feature. The importance scores calculated for each feature tell us how useful the feature was for constructing the boosted decision tree, and they can be ranked and compared to one another for feature selection.
###Code
X_data, y_label = make_regression(n_samples=X_train.shape[0], n_features=X_train.shape[1], n_informative=10, random_state=1)
xgboost_model = XGBRegressor()
xgboost_model.fit(X_data, y_label)
feature_importances_xgboost = xgboost_model.feature_importances_
for index, importance_score in enumerate(feature_importances_xgboost):
print('Feature: {}, Score: {}'.format(X_train.columns[index], importance_score))
def create_bar_plot(feature_importances, X_train):
'''
Create a bar plot of features against their corresponding feature importance score.
'''
x_indices = [_ for _ in range(len(feature_importances))]
plt.figure(figsize = (15, 5))
plt.bar(x_indices, feature_importances, color='blue')
plt.xticks(x_indices, X_train.columns)
plt.xlabel('Feature', fontsize=18)
plt.ylabel('Importance Score', fontsize=18)
plt.title('Feature Importance Scores', fontsize=18)
plt.show()
create_bar_plot(feature_importances_xgboost, X_train)
###Output
_____no_output_____
###Markdown
In the following cell, we rank each feature based on corresponding importance score.
###Code
def show_ranked_feature_importance_list(scores, data):
'''
Prints the features ranked by their corresponding importance score.
'''
lst = list(zip(data.columns, scores))
ranked_lst = sorted(lst, key= lambda t: t[1], reverse=True)
print(pd.DataFrame(ranked_lst, columns=['Feature', 'Importance Score']))
show_ranked_feature_importance_list(feature_importances_xgboost, X_train)
###Output
_____no_output_____
###Markdown
Step 2b: Ranking features by Permutation Feature Importance using the Scikit-learn k-NN AlgorithmThis approach is commonly used for selecting features in tabular data. We first randomly shuffle a single feature's values and train a model; in this example we use the k-nearest-neighbours algorithm. The permutation feature importance score is the decrease in the model's score when that single feature's values are shuffled. The decrease in the model score is representative of how dependent the model is on the feature. This technique can be computed many times with different permutations per feature.
###Code
X_data, y_label = make_regression(n_samples=X_train.shape[0], n_features=X_train.shape[1], n_informative=10, random_state=1)
k_nn_model = KNeighborsRegressor()
k_nn_model.fit(X_data, y_label)
feature_importances_permutations = permutation_importance(k_nn_model, X_data, y_label, scoring='neg_mean_squared_error').importances_mean
for index, importance_score in enumerate(feature_importances_permutations):
print('Feature: {}, Score: {}'.format(X_train.columns[index], importance_score))
create_bar_plot(feature_importances_permutations, X_train)
show_ranked_feature_importance_list(feature_importances_permutations, X_train)
###Output
_____no_output_____
###Markdown
Step 3: Prune Unimportant FeaturesThus far, we have discussed two common approaches for obtaining a ranked list of feature importance scores for each feature. From these lists we can identify unimportant features based on their importance scores and eliminate them from our training, validation and test sets. For example, if feature A has a higher importance score than feature B, then feature A is more important than feature B, and vice versa. Note that both approaches constrain the removal of features to the dataset itself, independent of the problem domain. After selecting your desired approach, move on to the next cell to prune features whose importance score is less than or equal to a threshold value. Depending on the approach of your choice and the distribution of scores, the `threshold` value may vary.In this example, we select the first approach with XGBoost and set the threshold value to 0.01.
###Code
threshold = 0.01
def remove_features(lst, data, threshold):
'''
Remove features found in lst from data iff its importance score is below threshold.
'''
features_to_remove = []
for index, pair in enumerate(list(zip(data.columns, lst))):
if pair[1] <= threshold:
features_to_remove.append(pair[0])
if features_to_remove:
        data.drop(features_to_remove, axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Assign `lst` to `feature_importances_xgboost` or `feature_importances_permutations` depending on whether you want to use the ranked list from XGBoost or from permutation feature importance, respectively.We remove all features that fall below `threshold` from our training data `X_train`, validation data `X_val`, and testing data `X_test`.
###Code
remove_features(lst=feature_importances_xgboost, data=X_train, threshold=threshold)
remove_features(lst=feature_importances_xgboost, data=X_val, threshold=threshold)
remove_features(lst=feature_importances_xgboost, data=X_test, threshold=threshold)
###Output
_____no_output_____
###Markdown
Step 4: Store Variables using `storemagic`After pruning the unimportant features, use `storemagic` to persist all relevant variables so that they can be reused in our next sequel notebook, [03_training_model_on_tabular_data.ipynb](03_training_model_on_tabular_data.ipynb), where we focus on model training.
###Code
# Using storemagic we persist the variables below so we can access them in 03_training_model_on_tabular_data.ipynb
%store X_train
%store X_test
%store X_val
###Output
_____no_output_____ |
examples/example_parametric_reactors/submersion_reactor.ipynb | ###Markdown
This example creates a submersion reactor using the SubmersionTokamakparametric reactor. By default the script saves stp, stl, html and svg files.
###Code
import paramak
my_reactor = paramak.SubmersionTokamak(
inner_bore_radial_thickness=30,
inboard_tf_leg_radial_thickness=30,
center_column_shield_radial_thickness=30,
divertor_radial_thickness=80,
inner_plasma_gap_radial_thickness=50,
plasma_radial_thickness=200,
outer_plasma_gap_radial_thickness=50,
firstwall_radial_thickness=30,
blanket_rear_wall_radial_thickness=30,
number_of_tf_coils=16,
rotation_angle=180,
support_radial_thickness=90,
inboard_blanket_radial_thickness=30,
outboard_blanket_radial_thickness=30,
elongation=2.00,
triangularity=0.50,
pf_coil_case_thicknesses=[10, 10, 10, 10],
pf_coil_radial_thicknesses=[20, 50, 50, 20],
pf_coil_vertical_thicknesses=[20, 50, 50, 20],
pf_coil_radial_position=[500, 550, 550, 500],
pf_coil_vertical_position=[270, 100, -100, -270],
rear_blanket_to_tf_gap=50,
outboard_tf_coil_radial_thickness=30,
outboard_tf_coil_poloidal_thickness=30,
)
my_reactor.show()
my_reactor.export_stp(output_folder='SubmersionTokamak')
my_reactor.export_neutronics_description('manifest.json')
my_reactor.export_svg('SubmersionTokamak/reactor.svg')
my_reactor.export_stl(output_folder='SubmersionTokamak')
my_reactor.export_html('SubmersionTokamak/reactor.html')
###Output
_____no_output_____ |
jupyter_notebooks/test notebook.ipynb | ###Markdown
Code Testing notebook
###Code
import torchvision.models as models
test = models.densenet201(num_classes=100)
test
model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
and callable(models.__dict__[name]))
type(model_names)
a = [1,23,4]
b = [5,6,7,8]
c = a+b
c
'test: {0}'.format(c)
import numpy as np
np.mean(c)
import torch.optim
###Output
_____no_output_____ |
Python_exercises/Exercicios_python.ipynb | ###Markdown
Deep Learning Course - Pre-course exercise Python - NumPy This is a Jupyter notebook containing matrix-programming exercises using Python and the NumPy library.These exercises serve to familiarize the participant with the Python language for matrix manipulation.This exercise list is a study guide. To do the exercises you will need to study Python and consult the documentation and tutorials available on the Internet.These exercises are of **intermediate to hard** difficulty; we will use advanced Python programming, exploring lists, dictionaries, matrix programming and object-oriented programming. Python The course will use Python version 3.6, so it is recommended that these exercises be done with Python 3.It is recommended to install Jupyter and Python using [Anaconda](https://www.continuum.io/downloads) - a distribution focused on Data Science that contains the main packages used in this area. Jupyter notebook This is a Jupyter Notebook. A Jupyter notebook is a mixture of the Markdown language for formatting text (like a Wiki) and a Python program. It is widely used among people who work with Data Science and Machine Learning in Python. If you need help on how to use Jupyter Notebooks, see the [beginner-guide](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/).Install the Jupyter nbextensions, a set of auxiliary tools. Enable the nbextensions "Table of Contents" so that this notebook is itemized. You can add as many cells as you want to this notebook to keep your answers well organized. Basic exercises Let's begin! Start by printing your name and e-mail here:
###Code
# fill in your details
print('My name is Gabriel Moraes Barros')
print('My email is [email protected]')
###Output
My name is Gabriel Moraes Barros
My email is [email protected]
###Markdown
Lists Here are some exercises with lists. Python is very good at processing lists. A list is a sequence of comma-separated elements inside square brackets:
###Code
mylist = [5, 8, 'abc', 0, 8.3]
###Output
_____no_output_____
###Markdown
Printing the number of elements of `mylist` and some of its elements
###Code
print(len(mylist))
print(mylist[0])
print(mylist[-1]) # note the index -1 (what does it mean?)
###Output
5
5
8.3
###Markdown
Index [-1] means the last element of the list. Now consider a list of 10 sequential numeric elements starting at zero. The following code is the typical way to create a list with the first 10 integers greater than or equal to zero: initialize an empty list and, for each i from 0 up to (but not including) 10, append it to list a:
###Code
a = []
for i in range(10):
a.append(i)
print(a)
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
List Comprehension The same list can be created using the construct called "List Comprehension". Search for this term on Google as "python list comprehension" and you will find many examples and tutorials on the subject. It is a compact way of doing an iterative append to a list. See how the snippet above was reduced to a single line:
###Code
a = [i for i in range(10)]
print(a)
m_list = ['r', 'i', 'c', 'a', 'r', 'd', 'o']
new_list = []
for k,i in enumerate(m_list):
new_list.append(str(k)+i)
print(new_list)
###Output
['0r', '1i', '2c', '3a', '4r', '5d', '6o']
###Markdown
- Explain what enumerate does in the program above: Answer: enumerate is used to iterate over `m_list`, with k being the index and i the k-th element of `m_list`. - Repeat the same exercise of creating the new_list, but using a list comprehension:
###Code
m_list = ['r', 'i', 'c', 'a', 'r', 'd', 'o']
n_list = [str(k)+i for k,i in enumerate(m_list)] # a list comprehension is used here
print(n_list)
###Output
['0r', '1i', '2c', '3a', '4r', '5d', '6o']
###Markdown
Slicing in Python A fundamental concept in Python is slicing. It will be used heavily throughout the course, so study it well. Initially we will work with list slicing, and later with arrays (or tensors) using NumPy.
###Code
print(a[3:5])
###Output
[3, 4]
###Markdown
Create and print a list from the already created list "a", but with only some of its elements
###Code
# print the odd elements of a
print(a[1::2])
# print the even elements of a
print(a[::2])
# print the last 3 elements of the list (use a negative index; look it up on the Internet)
print(a[-3:])
# print the first 3 elements of the list (see when the start and end values can be omitted)
print(a[:3])
print(a[1:4])
###Output
[1, 3, 5, 7, 9]
[0, 2, 4, 6, 8]
[7, 8, 9]
[0, 1, 2]
[1, 2, 3]
###Markdown
I did not understand: (see when it is possible to omit the start and end values).
###Code
# See the meaning of the step:
print(a[::2])
print(a[::-1])
# print the odd elements in reverse order, from largest to smallest
print(a[1::2][::-1])
###Output
[9, 7, 5, 3, 1]
###Markdown
Tuples
###Code
b = tuple(a)
print(b)
###Output
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
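###Markdown
A minimal illustrative sketch (added for clarity) of the difference discussed below: a tuple cannot be modified in place, while a list can.
###Code
lst = list(b)   # a mutable copy of the tuple's contents
lst[0] = 99     # lists are mutable: this works
try:
    b[0] = 99   # tuples are immutable: this raises TypeError
except TypeError:
    pass
###Output
_____no_output_____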
###Markdown
The main difference between a list and a tuple is: a tuple is a heterogeneous, immutable, structured data type. It is useful, for example, for (name, CPF) pairs, since a person will not change their CPF. Tuples can be used as dictionary keys. A tuple is usually a sequence of distinct things that you handle together. A list is a homogeneous, mutable structure, usually a sequence of items of the same nature that you handle individually. Dictionaries in Python Python has a very versatile data structure called the dictionary.
###Code
rob_record = {'nome': 'Roberto', 'idade': 18}
print(rob_record['nome'])
rob_record['idade'] = 20
print(rob_record)
###Output
{'nome': 'Roberto', 'idade': 20}
###Markdown
List of dictionaries:
###Code
records = [{'nome':'Alfredo', 'idade': 23},
{'nome':'Fernanda', 'idade': 16},
{'nome':'Carla', 'idade':33}]
# access Fernanda's name: complete and uncomment the following line
print(records[1]['nome'])
# create a list with all the names in the records list.
# the result should be the list ['Alfredo', 'Fernanda', 'Carla']
# 1. Using the traditional way of building a list with append
nlist = []
for v in records:
nlist.append(v['nome'])
print(nlist)
# 2. Using a list comprehension
nlist2 = [v['nome'] for v in records]
print(nlist2)
###Output
['Alfredo', 'Fernanda', 'Carla']
###Markdown
Exercises using NumPy The following exercises must use only the NumPy package; no other additional package should be used. There are several examples of NumPy usage in the set of tutorial notebooks available on GitHub: - https://github.com/robertoalotufo/ia898/blob/master/master/0_index.ipynb
###Code
import numpy as np
array = np.arange(10)
print(array)
A = np.arange(24).reshape(4,6)
print(A)
###Output
[[ 0 1 2 3 4 5]
[ 6 7 8 9 10 11]
[12 13 14 15 16 17]
[18 19 20 21 22 23]]
###Markdown
rows, cols, dimensions, shape and datatype
###Code
# print the number of rows of A
print(A.shape[0])
# print the number of columns of A
print(A.shape[1])
# print the number of dimensions of A
print(len(A.shape))
# print the shape of A:
print(A.shape)
# print the data type (dtype) of the elements of A:
print(A.dtype)
###Output
4
6
2
(4, 6)
int64
###Markdown
Reshape
###Code
# Consider the one-dimensional vector a:
a = np.array([1,2,3,4])
print(a, a.shape)
# convert the one-dimensional vector a into a column vector (4 rows and 1 column) using reshape
a = a.reshape(4,1)
print(a)
###Output
[[1]
[2]
[3]
[4]]
###Markdown
Arithmetic operations
###Code
B = A + 10
B
###Output
_____no_output_____
###Markdown
Binary (boolean) array
###Code
# create a boolean matrix C with True for the elements of B smaller than 18 (do not use an explicit loop)
C = B < 18 # modify your code here
print(C)
###Output
[[ True True True True True True]
[ True True False False False False]
[False False False False False False]
[False False False False False False]]
###Markdown
Boolean indexing Below is a program that creates the matrix D_loop from matrix B, replacing the elements smaller than 18 with their negated values
###Code
D_loop = B.copy()
for row in np.arange(B.shape[0]):
for col in np.arange(B.shape[1]):
if B[row,col] < 18:
D_loop[row,col] = - B[row,col]
print(D_loop)
###Output
[[-10 -11 -12 -13 -14 -15]
[-16 -17 18 19 20 21]
[ 22 23 24 25 26 27]
[ 28 29 30 31 32 33]]
###Markdown
Replace the program above with a single line without a loop
###Code
D = np.where(B < 18, -B, B) # << modify here using boolean indexing
print(D)
###Output
[[-10 -11 -12 -13 -14 -15]
[-16 -17 18 19 20 21]
[ 22 23 24 25 26 27]
[ 28 29 30 31 32 33]]
###Markdown
Axis reduction: sum Axis-reduction matrix operations are very useful and important; axis reduction is a key concept of matrix programming. Study the following example:
###Code
print(A)
print('-'*30)
print(A.shape)
print('-'*30)
As = A.sum(axis=0)
print(As)
print('-'*30)
print(As.shape)
# print the number of dimensions of the array As = A.sum(axis=0)
print(len(As.shape))
# compute the mean value of array A
print(A.mean())
###Output
1
11.5
###Markdown
$$ C(i,j) = \frac{A(i,j) - A_{min}}{A_{max} - A_{min}} $$
###Code
# Create the matrix C as the normalization of A, so that the values of C lie between 0 and 1
C = (A - np.min(A)) / (np.max(A) - np.min(A))
print(C)
# Modify the previous exercise, but now normalize each column of A
# so that the columns of matrix D lie between 0 and 1.
# Hint: use the axis-reduction concept
D = 1 - (np.max(A,axis=0) - A)/(np.max(A,axis=0) - np.min(A,axis=0) )
print(D)
###Output
[[ 0. 0. 0. 0. 0. 0. ]
[ 0.33333333 0.33333333 0.33333333 0.33333333 0.33333333 0.33333333]
[ 0.66666667 0.66666667 0.66666667 0.66666667 0.66666667 0.66666667]
[ 1. 1. 1. 1. 1. 1. ]]
###Markdown
Slicing in arrays
###Code
print(A)
# this indexing is called slicing:
AA = A[:,1::2]
print(AA)
# create the matrix AB with only the even rows of matrix A, using slicing:
AB = A[0::2,::]
print(AB)
# create the matrix AC with the same shape as matrix A, but with the elements in reverse order:
# reversing the order of both the rows and the columns
AC = A[::-1,::-1]
print(AC)
###Output
[[23 22 21 20 19 18]
[17 16 15 14 13 12]
[11 10 9 8 7 6]
[ 5 4 3 2 1 0]]
###Markdown
Matrix product (dot product) Compute the matrix E given by the matrix product between matrix A and its transpose: $$ E = A A^T $$
###Code
E = A.dot(A.T) # modify your code here
print(E)
print(A.shape)
print(A.T.shape)
# Uncomment the line and explain
# why the multiplication operation raises an error
Ee = A * A.T
print(A.shape)
print(A.T.shape)
###Output
(4, 6)
(6, 4)
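###Markdown
A minimal sketch (illustrative, not part of the original exercise) of the two broadcasting rules explained in the answer below: elementwise `*` works when each dimension pair is equal or one of them is 1.
###Code
ok = A * A[:, :1]   # shapes (4,6) and (4,1): dimension 1 broadcasts from 1 to 6, so this works
# A * A.T pairs shapes (4,6) and (6,4): 4 != 6 and neither is 1, so it raises ValueError
###Output
_____no_output_____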
###Markdown
Broadcasting in NumPy has two rules: a) it works when the two dimensions are equal; b) or when one of them is 1. Since A.shape[0] != 1 and A.T.shape[0] != 1 and A.shape[0] != A.T.shape[0], the operation raises an error. Multidimensional matrices In deep learning we will use multidimensional matrices, which are called arrays in NumPy. TensorFlow uses the name tensor for its multidimensional matrices. Matrices with more than 4 dimensions are hard to build intuition for; the best way to deal with them is by looking at their *shape*. 3-D array
###Code
F = A.reshape(2,3,4)
print(F)
###Output
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
###Markdown
Indexing Study and explain the following indexing operations:
###Code
# This is the second 3x4 matrix (along dimension 0) of tensor F
print(F[1])
# This is the FIRST ROW of the second matrix (along dimension 0) of tensor F
print(F[1,0])
# This is the THIRD ELEMENT of the first row of the second matrix (along dimension 0) of tensor F
print(F[1,0,2])
# print the number of dimensions of F
print(len(F.shape))
# print the shape of F
print(F.shape)
###Output
3
(2, 3, 4)
###Markdown
Axis reduction - applied to two axes simultaneously
###Code
# compute the mean of the matrices F[0] and F[1] with a single command using F.mean(??)
print(np.mean( np.array([F[0,:,:], F[1,:,:] ]), axis=0 ))
print(np.mean( np.array([F[0,:,:], F[1,:,:] ]) ))
###Output
[[ 6. 7. 8. 9.]
[ 10. 11. 12. 13.]
[ 14. 15. 16. 17.]]
11.5
###Markdown
Broadcasting What does the concept of broadcasting mean in NumPy?
###Code
# Using broadcasting, change the shape of the vector a so that the broadcast can happen in G = A + a
a = np.arange(4)
#print(a.shape)
a = a.reshape(4,1)
#print(A.shape)
G = A + a
print(G)
###Output
[[ 0 1 2 3 4 5]
[ 7 8 9 10 11 12]
[14 15 16 17 18 19]
[21 22 23 24 25 26]]
###Markdown
Function - split - train and validation data Define a function that receives a matrix as input and splits it into train and validation matrices, as in the following example:
###Code
# Input: a 10x9 example data matrix
aa = np.arange(90).reshape(10,9)
print(aa)
# Expected output of t, v = split(aa, 0.8)
t = np.arange(72).reshape(8,9)
print(t)
v = np.arange(72,90).reshape(2,9)
print(v)
print(aa.shape)
print(aa.shape)
bb = aa[:int(aa.shape[0]*0.8)]
print(bb.shape)
# Avoid explicit loops
# Do not use libraries other than NumPy
def split(dados, split_factor):
'''
splits the matrix dados into two sets:
matrix train: split_factor * number of rows of dados
matrix val: (1 - split_factor) * number of rows of dados
input parameters:
dados: input matrix
split_factor: between 0. and 1. - factor for the split into two matrices
output parameters:
train: matrix with the initial rows of dados
val: matrix with the remaining rows
'''
assert split_factor > 0 and split_factor < 1
# insert your code here
train = dados[:int(dados.shape[0]*split_factor)]
val = dados[int(dados.shape[0]*split_factor):]
return train, val
t,v = split(aa, 0.8)
#print('-'*30)
print('v=\n', v)
#print('-'*30)
print('t=\n', t)
# Test your function with other values
teste = np.random.rand(10,10)
teste_2 = teste.reshape(50,2)
t,v = split(teste_2, 0.50)
#print('-'*30)
print('v=\n', v.shape)
#print('-'*30)
print('t=\n', t.shape)
###Output
v=
(25, 2)
t=
(25, 2)
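###Markdown
A minimal alternative sketch (illustrative): NumPy's `np.split` can produce the same two pieces in one call.
###Code
# split at row int(0.8 * number_of_rows); np.split returns a list of sub-arrays
t2, v2 = np.split(aa, [int(aa.shape[0] * 0.8)], axis=0)
# t2.shape == (8, 9) and v2.shape == (2, 9), matching split(aa, 0.8) above
###Output
_____no_output_____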
###Markdown
Object-Oriented Programming Official documentation: https://docs.python.org/3/tutorial/classes.html Classic way of defining a function:
###Code
data = np.array([13, 63, 5, 378, 58, 40])
def avg(d):
return sum(d)/len(d)
avg(data)
###Output
_____no_output_____
###Markdown
Class definition, variables, initialization and method Definition of the `MyAvg` class, containing two variables: `id` (shared) and `d`; initialization and the `avg` method.
###Code
class MyAvg:
id = 0.33 # variable shared by all instances
def __init__(self,data):
self.d = data # variable bound to each instance
def avg(self): # method to compute the mean
return sum(self.d)/len(self.d)
###Output
_____no_output_____
###Markdown
Objects `a` and `b` are instances of the `MyAvg` class. Instantiating a class initializes it through the call to the __init__ method:
###Code
a = MyAvg(data)
b = MyAvg(2*data)
###Output
_____no_output_____
###Markdown
Applying the `avg()` method returns the mean of the data of objects a and b:
###Code
print(a.avg())
print(b.avg())
# Print the data values associated with objects a and b:
print(a.d)
print(b.d)
# Print the shared variable `id` of each of the objects a and b:
print(a.id)
print(b.id)
###Output
0.33
0.33
###Markdown
Class inheritance
###Code
class MyAvgStd(MyAvg):
def var(self): # additional method to compute the variance (note: it actually returns the standard deviation)
u = self.avg()
return np.sqrt(np.sum((self.d - u)**2)/len(self.d))
c = MyAvgStd(data)
print('mean:',c.avg())
print('variance:',c.var())
# print the data associated with object c and its shared variable id
print(c.d)
print(c.id)
###Output
[ 13 63 5 378 58 40]
0.33
###Markdown
Exercise: converting the `split` function into the `C_Split` class Implement the `C_Split` class to provide the same functionality as the `split` function above
###Code
class C_Split():
def __init__(self,matrix):
self.d = matrix
#self.split_factor
def split(self, split_factor ):
train = self.d[:int(self.d.shape[0]*split_factor)]
val = self.d[int(self.d.shape[0]*split_factor):]
return train,val
data_train_val = C_Split(aa)
train, val = data_train_val.split(0.8)
print('train=\n', train)
print('val=\n', val)
###Output
train=
[[ 0 1 2 3 4 5 6 7 8]
[ 9 10 11 12 13 14 15 16 17]
[18 19 20 21 22 23 24 25 26]
[27 28 29 30 31 32 33 34 35]
[36 37 38 39 40 41 42 43 44]
[45 46 47 48 49 50 51 52 53]
[54 55 56 57 58 59 60 61 62]
[63 64 65 66 67 68 69 70 71]]
val=
[[72 73 74 75 76 77 78 79 80]
[81 82 83 84 85 86 87 88 89]]
###Markdown
Class with `__len__` and `__getitem__` methods A class with `__len__` and `__getitem__` methods allows its objects to be indexed and their number of elements computed. See the following example:
###Code
class Word():
def __init__(self, phrase):
self.wordlist = phrase.split() # splits the phrase into a list of words
def __len__(self):
return len(self.wordlist)
def __getitem__(self,x):
return self.wordlist[x]
frase = 'Esta frase é formada por 7 palavras'
palavras = Word(frase)
print(list(palavras))
palavras[3] # the object can be indexed
print(len(palavras))
###Output
7
###Markdown
Exercise: indexing the elements of a dictionary A Python dictionary is not indexed. For example, take the dictionary `d` below: it is not possible to index d[0] or d[1] to fetch the first or second (key: value) pair.
###Code
d = {'a':1,'b': 2}
###Output
_____no_output_____
###Markdown
Implement a class that receives a dictionary and allows it to be indexed. To convert a dictionary into a list of pairs, use:
###Code
list(d.items())
###Output
_____no_output_____
###Markdown
Complete the definition of the `DicData` class below so that a dictionary can be indexed:
###Code
print(len(d))
lista_items = d.items()
print(list(lista_items)[1])
class DicData():
def __init__(self, dic):
self.dictlist = dic.items()
def __len__(self):
return(len(self.dictlist))
def __getitem__(self,x):
return list(self.dictlist)[x]
dd = DicData(d)
print(len(dd))
print(dd[0])
###Output
2
('a', 1)
###Markdown
Iterators Iterators are useful in constructs of the form `for a in b:`. Lists in Python are considered iterables, since they can be used in this construct:
###Code
for i in ['a', 'b', 'c']:
print(i)
###Output
a
b
c
###Markdown
Python's `range()` is also an iterable:
###Code
for i in range(3):
print(i)
###Output
0
1
2
###Markdown
You can get the iterator of these structures using Python's `iter()` function and then traverse their elements using `next()`:
###Code
lista = ['a', 'b', 'c']
iterador = iter(lista)
print('type of lista:', type(lista))
print('type of iterador:', type(iterador))
print(next(iterador))
###Output
type of lista: <class 'list'>
type of iterador: <class 'list_iterator'>
a
###Markdown
Iterator access is sequential, and after the last element an exception is raised indicating the end of the iterator. Uncomment the last `next` and see the type of exception that occurs.
###Code
print(next(iterador))
print(next(iterador))
# print(next(iterador))   # uncommenting this raises StopIteration
###Output
b
c
###Markdown
Creating iterable objects To implement an iterator object you need to write a `__next__()` method for the class, and for it to be accessible as an iterable you also need to write an `__iter__()` method:
###Code
class WordIterator():
def __init__(self, phrase):
self.words = phrase.split()
def __iter__(self):
self.iter_index = 0
return self
def __next__(self):
if self.iter_index < len(self.words):
i = self.iter_index
self.iter_index += 1
return self.words[i]
else:
raise StopIteration()
###Output
_____no_output_____
###Markdown
The class above is both an iterator and an iterable. In the `__iter__()` method we reset the starting index for the iterator and return the object itself (an iterator). In the `__next__()` method we return the word at the current index, or raise the stop exception if we are at the end.
###Code
frase = 'Esta frase é formada por 7 palavras'
iterador_de_palavras = WordIterator(frase)
for palavra in iterador_de_palavras:
print(palavra)
###Output
Esta
frase
é
formada
por
7
palavras
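###Markdown
A minimal alternative sketch (illustrative, not required by the exercise): a generator function gives the same iteration behavior without writing `__iter__`/`__next__` by hand.
###Code
def word_generator(phrase):
    # yield turns this into a generator; StopIteration is raised automatically
    for word in phrase.split():
        yield word
# for palavra in word_generator(frase): print(palavra) would print the same words as above
###Output
_____no_output_____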
###Markdown
Exercise with an iterator Create a `DictIterator` class that allows the items of a dictionary to be traversed using `for`
###Code
class DicData():
def __init__(self, dic):
self.dictlist = dic.items()
def __len__(self):
return(len(self.dictlist))
def __getitem__(self,x):
return list(self.dictlist)[x]
class DictIterator():
def __init__(self, dic):
self.dictlist = dic.items()
def __len__(self):
return(len(self.dictlist))
def __getitem__(self,x):
return list(self.dictlist)[x]
def __iter__(self):
self.iter_index = 0
return self
def __next__(self):
if self.iter_index < len(list(self.dictlist)):
i = self.iter_index
self.iter_index += 1
return list(self.dictlist)[i]
else:
#print("The generator has come to an end!")
raise StopIteration()
d = {'a':1,'b': 2, 'c': 3}
d_iter = DictIterator(d)
for i in d_iter:
print(i)
###Output
('a', 1)
('b', 2)
('c', 3)
###Markdown
Object as a function It is possible to declare a class whose objects can be called (*callable objects*). For that, the class must contain the `__call__` method. See the following example:
###Code
class Scale():
def __init__(self, w):
self._w = w
def __call__(self, x):
return x * self._w
s = Scale(100.)
print(s(5))
###Output
500.0
###Markdown
Exercise: class with a callable object Define a class inheriting from the `Scale` class that allows the variable `self._w` to be modified.
###Code
class AjustaPeso(Scale):
def wset(self,new_value):
self._w = new_value
return self._w
ap = AjustaPeso(100.)
print(ap(5))
ap.wset(10)
print(ap(5))
###Output
500.0
50
|
examples/test_mean_field.ipynb | ###Markdown
Benchmark on Potts Model - LBP is much more stable than MF. - MF can get similar results to LBP.
###Code
genops.set_backend(genops.NUMPY)
size = 40
n_states = 20
expids = list(range(10))
record = []
for expid in expids:
with genops.local_seed(expid):
x = genops.normal(shape=[size, size, n_states])
binary_edges = genops.convert(make_grid_edges(x))
binary_potentials = genops.normal(shape=[binary_edges.shape[0], n_states, n_states])
unary_potentials = x.reshape(-1, n_states)
args = (unary_potentials, binary_potentials, binary_edges)
E = partial(compute_energy_plus, *args)
greedy = E(labels=genops.argmax(unary_potentials, axis=1))
mf = E(labels=naive_mean_field(*args, max_iter=20, damping=0.2, track_best=False))
bp = E(labels=lbp_plus(*args, max_iter=20, track_best=False))
# print(E(labels=inference_ad3(*args)))
record.append([greedy, mf, bp])
plt.scatter(expids, genops.tensor(record)[:, 0], label='greedy')
plt.scatter(expids, genops.tensor(record)[:, 1], label='mf')
plt.scatter(expids, genops.tensor(record)[:, 2], label='bp')
plt.legend()
###Output
_____no_output_____
###Markdown
Benchmark on Chain-Like CRF Chain-like CRF means a chain `A-B-C-D-E`, where some skip binaries within a fixed window also exist (`A-C`, `B-D`, `C-E`). - LBP is always more stable than MF. - The fewer the factors, the better LBP performs.
###Code
genops.set_backend(genops.NUMPY)
size = 25
n_states = 32
plt.subplots(2, 2, figsize=(10, 10))
for wid, window in enumerate([1, 2, 3, size-1]):
plt.subplot(2, 2, wid + 1)
expids = list(range(10))
record = []
for expid in expids:
with genops.local_seed(expid):
unary_potentials = genops.abs(genops.normal(shape=(size, n_states)))
binary_edges = genops.tensor(generate_binary_edges(size, window))
binary_potentials = genops.abs(genops.normal(shape=(binary_edges.shape[0], n_states, n_states)))
args = (unary_potentials, binary_potentials, binary_edges)
E = partial(compute_energy_plus, *args)
greedy = E(labels=genops.argmax(unary_potentials, axis=1))
mf = E(labels=naive_mean_field(*args, max_iter=20, damping=0.2, track_best=True))
bp = E(labels=lbp_plus(*args, max_iter=20, track_best=True))
# print(E(labels=inference_ad3(*args)))
record.append([greedy, mf, bp])
plt.scatter(expids, genops.tensor(record)[:, 0], label='greedy')
plt.scatter(expids, genops.tensor(record)[:, 1], label='mf')
plt.scatter(expids, genops.tensor(record)[:, 2], label='bp')
plt.legend()
plt.title(f"window={window}")
###Output
_____no_output_____ |
python/HW15.ipynb | ###Markdown
Homework 15: Integrals, Interpolation, Curve Fit Problem 1 Find the numerical integral of the following function: $$I = \int_0^{2\pi} e^{-x}\sin(3x)dx.$$ (An illustrative sketch follows this cell.) Problem 2 Given the data ```x_given``` and ```y_given``` below, find the linear interpolant y_il corresponding to ```x_i=0.54```, and also find the cubic-spline value y_is at the same point. The ```x_given``` and ```y_given``` values are computed from $y=f(x)=\exp(4x)$. Find the relative error between your interpolants and the exact value; the relative error is given by $\epsilon = |(y-y_{exact})/y_{exact}|$. Are you happy with the result?
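###Markdown
A minimal sketch for Problem 1 (illustrative only and assuming SciPy is available; not the graded solution):
###Code
import numpy as np
from scipy.integrate import quad
# integrand of Problem 1
f = lambda x: np.exp(-x) * np.sin(3.0 * x)
I, err = quad(f, 0.0, 2.0 * np.pi)
# I is approximately 0.29944; the exact value is 0.3*(1 - exp(-2*pi))
###Output
_____no_output_____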
###Code
import numpy as np
x_given = np.array([0,0.2,0.4,0.8,1.0])
y_given = np.exp(4.0*x_given)
x_i = 0.54
y_exact = np.exp(4.0*x_i)
###Output
_____no_output_____
###Markdown
Problem 3 In a previous assignment we used thermo data for heat capacities of species. It's kind of annoying that we have two temperature ranges for each species. For the given array of temperatures (K) below, the corresponding cp/Rg array is given for CH4; it is computed using the equations and coefficients for the two temperature ranges given in previous assignments. Fit a single 4th-order polynomial to the whole range of temperature data. At each point, compare your polynomial fit to the one given, and report the maximum relative error that occurs over the list of temperatures. Plot the two versions to visually compare the original data and your polynomial fit. (An illustrative sketch appears as comments at the end of the cell below.)
###Code
T = np.linspace(300.,3000.,100)
a_lo = np.array([ 5.15,-1.37E-02,4.92E-05,-4.85E-08,1.67E-11])
a_hi = np.array([7.49E-02,1.34E-02,-5.73E-06,1.22E-09,-1.02E-13])
i_lo = np.where(np.logical_and(T>=300.0, T<1000.0))
i_hi = np.where(np.logical_and(T>=1000.0, T<=3000.0))
cpRg = np.zeros(100)
cpRg[i_lo] = a_lo[0] + a_lo[1]*T[i_lo] + a_lo[2]*T[i_lo]**2.0 + \
a_lo[3]*T[i_lo]**3.0 + a_lo[4]*T[i_lo]**4.0
cpRg[i_hi] = a_hi[0] + a_hi[1]*T[i_hi] + a_hi[2]*T[i_hi]**2.0 + \
a_hi[3]*T[i_hi]**3.0 + a_hi[4]*T[i_hi]**4.0
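# --- Illustrative sketch for the polynomial fit (an outline, not the graded solution) ---
# coeffs = np.polyfit(T, cpRg, 4)                  # 4th-order least-squares polynomial
# cpRg_fit = np.polyval(coeffs, T)
# max_rel_err = np.max(np.abs((cpRg_fit - cpRg) / cpRg))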
###Output
_____no_output_____
###Markdown
Problem 4 The following kinetic data R(T) were collected. We want to fit them to the model function $R = kT^me^{-E_a/(R_gT)}$. Here, $k$, $m$, and $(E_a/R_g)$ are adjustable parameters that we want to find so that our model best fits the data. Find the best parameters and plot the data and the model function together on the same graph. (An illustrative curve_fit sketch appears as comments at the end of the cell below.)
###Code
T = np.linspace(500.,1000.,8)
R = [105.598, 89.700, 70.768, 66.996, 60.711, 58.992, 55.8328, 53.420]
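# --- Illustrative sketch for Problem 4 (an outline with a guessed p0, not the graded solution) ---
# from scipy.optimize import curve_fit
# model = lambda T, k, m, EaRg: k * T**m * np.exp(-EaRg / T)
# popt, pcov = curve_fit(model, T, np.array(R), p0=[1.0, 0.5, 1000.0])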
###Output
_____no_output_____ |
mf_performance_analysis/mf data extraction/Equity Funds/Sectoral Fund/st_mf_data_extraction.ipynb | ###Markdown
Sectoral/Thematic Fund These mutual funds invest in stocks selected from a single sector or fitting a specific theme. Extracting Sectoral/Thematic Mutual Fund's Historical Investment Returns Data Data in this table: get absolute historical returns for a ₹1000 investment. If the 1Y column value is 1234.5, your ₹1000 investment 1 year back would have grown to ₹1234.5.
###Code
sf_lump_sum_rtn = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/returns/sectoralthematic.html")
df1 = pd.DataFrame(sf_lump_sum_rtn[0])
#Renaming historical returns column names
df1.rename({'1W': '1W_RTN(%)', '1M': '1M_RTN(%)', '3M': '3M_RTN(%)', '6M': '6M_RTN(%)',
'YTD': 'YTD_RTN(%)', '1Y': '1Y_RTN(%)', '2Y': '2Y_RTN(%)', '3Y': '3Y_RTN(%)',
'5Y': '5Y_RTN(%)', '10Y': '10Y_RTN(%)'
}, axis=1, inplace=True)
print("Shape of the dataframe:", df1.shape)
df1.head()
###Output
Shape of the dataframe: (236, 13)
###Markdown
Extracting Sectoral/Thematic Mutual Fund's Monthly Returns Data Data in this table: get monthly returns. If the Jan month column value is 5.4%, the fund has given 5.4% returns in the month of Jan.
###Code
sf_monthly_rtn = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/monthly-returns/sectoralthematic.html")
df2 = pd.DataFrame(sf_monthly_rtn[0])
#Renaming df2's monthly-return columns: append '(%)' to each month column
#(assumes the month columns are the ones named like "Apr'21", i.e. containing a quote)
df2.rename(columns={c: c + "(%)" for c in df2.columns if "'" in c}, inplace=True)
df2.rename({'MTD': 'MTD_RTN(%)'}, axis=1, inplace=True)
print("Shape of the dataframe:", df2.shape)
df2.head()
###Output
Shape of the dataframe: (236, 14)
###Markdown
Extracting Sectoral/Thematic Mutual Fund's Quarterly Returns Data Data in this table: get quarterly returns. If the Q1 column value is 5.4%, the fund has given 5.4% returns from 1st Jan to 31st Mar.
###Code
sf_quarterly_rtn = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/quarterly-returns/sectoralthematic.html")
df3 = pd.DataFrame(sf_quarterly_rtn[0])
print("Shape of the dataframe:", df3.shape)
df3.head()
###Output
Shape of the dataframe: (234, 14)
###Markdown
Extracting Sectoral/Thematic Mutual Fund's Annual Investment Returns Data Data in this table: get annual returns. If the 2018 year column value is 5.4%, the fund has given 5.4% returns from 1st Jan to 31st Dec (or the last available date).
###Code
sf_annual_rtn = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/annual-returns/sectoralthematic.html")
df4 = pd.DataFrame(sf_annual_rtn[0])
#Renaming yearly returns column names
df4.rename({'2020': '2020_RTN(%)', '2019': '2019_RTN(%)', '2018': '2018_RTN(%)', '2017': '2017_RTN(%)',
'2016': '2016_RTN(%)', '2015': '2015_RTN(%)', '2014': '2014_RTN(%)', '2013': '2013_RTN(%)',
'2012': '2012_RTN(%)', '2011': '2011_RTN(%)', '2010': '2010_RTN(%)'
}, axis=1, inplace=True)
print("Shape of the dataframe:", df4.shape)
df4.head()
###Output
Shape of the dataframe: (226, 14)
###Markdown
Extracting Sectoral/Thematic Mutual Fund's Rank Within Category Data Data in this table: get the performance rank within the category. If the 1Y column value is 3/45, the fund ranked 3rd in terms of performance out of 45 funds in that category.
###Code
sf_rank_in_category = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/ranks/sectoralthematic.html")
df5 = pd.DataFrame(sf_rank_in_category[0])
#Renaming df5 column names
df5.rename({'1W': '1W_Rank', '1M': '1M_Rank', '3M': '3M_Rank', '6M': '6M_Rank', 'YTD': 'YTD_Rank',
'1Y': '1Y_Rank', '2Y': '2Y_Rank', '3Y': '3Y_Rank', '5Y': '5Y_Rank', '10Y': '10Y_Rank'
}, axis=1, inplace=True)
print("Shape of the dataframe:", df5.shape)
df5.head()
###Output
Shape of the dataframe: (205, 12)
###Markdown
Extracting Sectoral/Thematic Mutual Fund's Risk Ratios Data Data in this table: get the values of risk ratios calculated on daily returns for the last 3 years.
###Code
sf_risk_ratio = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/risk-ratios/sectoralthematic.html")
df6 = pd.DataFrame(sf_risk_ratio[0])
#Droping the 'Category' column
df6.drop('Category', inplace=True, axis=1)
print("Shape of the dataframe:", df6.shape)
df6.head()
###Output
Shape of the dataframe: (77, 8)
###Markdown
Extracting Sectoral/Thematic Mutual Fund's Portfolio Data Data in this table: compare how schemes have invested money across various asset classes and numbers of instruments.
###Code
sf_portfolio = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/portfolioassets/sectoralthematic.html")
df7 = pd.DataFrame(sf_portfolio[0])
#Renaming the turnover ratio column
df7.rename({'Turnover ratio': 'Turnover ratio(%)'}, axis=1, inplace=True)
print("Shape of the dataframe:", df7.shape)
df7.head()
###Output
Shape of the dataframe: (228, 10)
###Markdown
Extracting Sectoral/Thematic Mutual Fund's Latest NAV Data Data in this table: get the latest NAV values for the mutual funds.
###Code
sf_nav = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/navs/sectoralthematic.html")
df8 = pd.DataFrame(sf_nav[0])
df8.rename({'1D Change' : '1D Change(%)'}, axis=1, inplace=True)
print("Shape of the dataframe:", df8.shape)
df8.head()
###Output
Shape of the dataframe: (236, 10)
###Markdown
Extracting Sectoral/Thematic Mutual Fund's SIP Returns Data Data in this table: get absolute SIP returns. If the 1Y column value is 10%, the fund has given 10% returns on SIP investments started 1 year back from the latest NAV date.
###Code
sf_sip_rtns = pd.read_html(
"https://www.moneycontrol.com/mutual-funds/performance-tracker/sip-returns/sectoralthematic.html")
df9 = pd.DataFrame(sf_sip_rtns[0])
#Renaming SIP returns column names
df9.rename({'1Y': '1Y_SIP_RTN(%)', '2Y': '2Y_SIP_RTN(%)', '3Y': '3Y_SIP_RTN(%)',
'5Y': '5Y_SIP_RTN(%)', '10Y': '10Y_SIP_RTN(%)', 'YTD' : 'YTD_SIP_RTN(%)'
}, axis=1, inplace=True)
print("Shape of the dataframe:", df9.shape)
df9.head()
df_final = pd.concat([df1,df2,df3,df4,df5,df6,df7,df8,df9],axis=1,sort=False)
print("Shape of the dataframe:", df_final.shape)
# Remove duplicate columns by name in Pandas
df_final = df_final.loc[:,~df_final.columns.duplicated()]
# Removing spaces in the column names
#df_final.columns = df_final.columns.str.replace(' ','_')
print("Shape of the dataframe:", df_final.shape)
df_final.head()
#Exporting the consolidated elss mf data as a csv file
#print("Shape of the dataframe:", df_final.shape)
#df_final.to_csv('st_mf_data('+ str(pd.to_datetime('today').strftime('%d-%b-%Y %H:%M:%S')) + ').csv',
# index=False)
#Exporting the elss mf data columns with its datatype as a csv file
#df_dtypes.to_csv('elss_mf_col_data_types('+ str(pd.to_datetime('today').strftime('%d-%b-%Y %H:%M:%S')) + '.csv)')
###Output
_____no_output_____ |
machine_learning/Mega-BDT-ku-kd-HL-LHC.ipynb | ###Markdown
Just kappa_u and kappa_d
###Code
df_ku['class'] = 1
df_ku_test['class'] = 1
df_kd['class'] = 0
df_kd_test['class'] = 0
channels = [df_ku, df_kd]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
names_orig = names
class_names = [r'$d \bar d hh$', r'$u \bar u hh$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-2class-ku-kd.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-ku-kd.pdf'
classifier, x_test, y_test, shap_values_2ud, X_shap_2ud = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_2ud, X_shap_2ud, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
ku_p = df_ku_test.sample(n=round(weight_ku), replace=True, random_state=seed).reset_index(drop=True)
kd_p = df_kd_test.sample(n=round(weight_kd), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(ku_p['class'].values, classifier.predict(ku_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for kd: {:4.2f}% '.format(100*metrics.accuracy_score(kd_p['class'].values, classifier.predict(kd_p.drop(columns=['class', 'weight']).values))))
df_array = [df_kd_test, df_ku_test]
weight_array = [weight_kd, weight_ku]
keys = ['ku', 'kd']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-2class-ku-kd.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
hh and kappa_u, kappa_d
###Code
df_ku['class'] = 2
df_ku_test['class'] = 2
df_kd['class'] = 1
df_kd_test['class'] = 1
df_hhsm['class'] = 0
df_hhsm_test['class'] = 0
channels = [df_ku, df_kd, df_hhsm]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
names_orig = names
class_names = [r'$hh^{gg\rm F}$', r'$d \bar d hh$', r'$u \bar u hh$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-2class-ku-kd-hhsm.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-ku-kd-hhsm.pdf'
classifier, x_test, y_test, shap_values_3ud, X_shap_3ud = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_3ud, X_shap_3ud, shap_plot, names=names, class_names=class_names, cmp=cmp_3)
ku_p = df_ku_test.sample(n=round(weight_ku), replace=True, random_state=seed).reset_index(drop=True)
kd_p = df_kd_test.sample(n=round(weight_kd), replace=True, random_state=seed).reset_index(drop=True)
hhsm_p = df_hhsm_test.sample(n=round(weight_hhsm), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(ku_p['class'].values, classifier.predict(ku_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for kd: {:4.2f}% '.format(100*metrics.accuracy_score(kd_p['class'].values, classifier.predict(kd_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_p['class'].values, classifier.predict(hhsm_p.drop(columns=['class', 'weight']).values))))
df_array = [df_kd_test, df_ku_test, df_hhsm_test]
weight_array = [weight_kd, weight_ku, weight_hhsm]
keys = ['ku', 'kd', 'hhsm']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-2class-ku-kd-hhsm.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
_________________________________ kappa_u
###Code
df_ku['class'] = 3
df_ku_test['class'] = 3
df_hhsm['class'] = 2
df_hhsm_test['class'] = 2
channels = [df_ku, df_hhsm, df_bbh_tth, df_bbxaa]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$Q\bar{Q}h$', r'$hh^{gg\rm F}$', r'$u \bar u hh$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-4class-ku.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-bbxaa-bbh-tth-hhsm-ku-only.pdf'
classifier, x_test, y_test, shap_values_4u, X_shap_4u = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_4u, X_shap_4u, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
ku_p = df_ku_test.sample(n=round(weight_ku), replace=True, random_state=seed).reset_index(drop=True)
hhsm_p = df_hhsm_test.sample(n=round(weight_hhsm), replace=True, random_state=seed).reset_index(drop=True)
bbh_tth_p = df_bbh_tth_test.sample(n=round(weight_bbh_tth), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_p = df_bbxaa_test.sample(n=round(weight_bbxaa), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(ku_p['class'].values, classifier.predict(ku_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_p['class'].values, classifier.predict(hhsm_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbh+tth: {:4.2f}% '.format(100*metrics.accuracy_score(bbh_tth_p['class'].values, classifier.predict(bbh_tth_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(bbxaa_p['class'].values, classifier.predict(bbxaa_p.drop(columns=['class', 'weight']).values))))
df_array = [df_bbxaa_test, df_bbh_tth_test, df_hhsm, df_ku_test]
weight_array = [weight_bbxaa, weight_bbh_tth, weight_hhsm, weight_ku]
keys = ['ku', 'hhsm', 'tth+bbh', 'bbxaa']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-4class-ku-only.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
_________________________________ kappa_d
###Code
df_kd['class'] = 3
df_kd_test['class'] = 3
df_hhsm['class'] = 2
df_hhsm_test['class'] = 2
channels = [df_kd, df_hhsm, df_bbh_tth, df_bbxaa]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$Q\bar{Q}h$', r'$hh^{gg\rm F}$', r'$d \bar d hh$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-4class-kd.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-bbxaa-bbh-tth-hhsm-kd-only.pdf'
classifier, x_test, y_test, shap_values_4d, X_shap_4d = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_4d, X_shap_4d, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
kd_p = df_kd_test.sample(n=round(weight_kd), replace=True, random_state=seed).reset_index(drop=True)
hhsm_p = df_hhsm_test.sample(n=round(weight_hhsm), replace=True, random_state=seed).reset_index(drop=True)
bbh_tth_p = df_bbh_tth_test.sample(n=round(weight_bbh_tth), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_p = df_bbxaa_test.sample(n=round(weight_bbxaa), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for kd: {:4.2f}% '.format(100*metrics.accuracy_score(kd_p['class'].values, classifier.predict(kd_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_p['class'].values, classifier.predict(hhsm_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbh+tth: {:4.2f}% '.format(100*metrics.accuracy_score(bbh_tth_p['class'].values, classifier.predict(bbh_tth_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(bbxaa_p['class'].values, classifier.predict(bbxaa_p.drop(columns=['class', 'weight']).values))))
df_array = [df_bbxaa_test, df_bbh_tth_test, df_hhsm, df_kd_test]
weight_array = [weight_bbxaa, weight_bbh_tth, weight_hhsm, weight_kd]
keys = ['kd', 'hhsm', 'tth+bbh', 'bbxaa']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-4class-kd-only.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____
###Markdown
_________________________________ The kappa_u & kappa_d
###Code
df_ku['class'] = 4
df_ku_test['class'] = 4
df_kd['class'] = 3
df_kd_test['class'] = 3
df_hhsm['class'] = 2
df_hhsm_test['class'] = 2
channels = [df_ku, df_kd, df_hhsm, df_bbh_tth, df_bbxaa]
df_train = pd.concat(channels, ignore_index=True)
df_train = df_train.sample(frac=1).reset_index(drop=True)
class_names = [r'$bb\gamma\gamma$', r'$Q\bar{Q}h$', r'$hh^{gg\rm F}$', r'$d \bar d hh$', r'$u \bar u hh$']
filename = '../results/models/HL-LHC-BDT/hh-BDT-5class-ku-kd.pickle.dat'
shap_plot = '../plots/HL-LHC-shap-bbxaa-bbh-tth-hhsm-ku-kd.pdf'
classifier, x_test, y_test, shap_values_5ud, X_shap_5ud = runBDT(df_train, filename, depth=10)
abs_shap(shap_values_5ud, X_shap_5ud, shap_plot, names=names, class_names=class_names, cmp=cmp_5)
ku_p = df_ku_test.sample(n=round(weight_ku), replace=True, random_state=seed).reset_index(drop=True)
kd_p = df_kd_test.sample(n=round(weight_kd), replace=True, random_state=seed).reset_index(drop=True)
hhsm_p = df_hhsm_test.sample(n=round(weight_hhsm), replace=True, random_state=seed).reset_index(drop=True)
bbh_tth_p = df_bbh_tth_test.sample(n=round(weight_bbh_tth), replace=True, random_state=seed).reset_index(drop=True)
bbxaa_p = df_bbxaa_test.sample(n=round(weight_bbxaa), replace=True, random_state=seed).reset_index(drop=True)
print('Accuracy Score for ku: {:4.2f}% '.format(100*metrics.accuracy_score(ku_p['class'].values, classifier.predict(ku_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for kd: {:4.2f}% '.format(100*metrics.accuracy_score(kd_p['class'].values, classifier.predict(kd_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for hhsm: {:4.2f}% '.format(100*metrics.accuracy_score(hhsm_p['class'].values, classifier.predict(hhsm_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbh+tth: {:4.2f}% '.format(100*metrics.accuracy_score(bbh_tth_p['class'].values, classifier.predict(bbh_tth_p.drop(columns=['class', 'weight']).values))))
print('Accuracy Score for bbxaa: {:4.2f}% '.format(100*metrics.accuracy_score(bbxaa_p['class'].values, classifier.predict(bbxaa_p.drop(columns=['class', 'weight']).values))))
df_array = [df_bbxaa_test, df_bbh_tth_test, df_hhsm, df_kd_test, df_ku_test]
weight_array = [weight_bbxaa, weight_bbh_tth, weight_hhsm, weight_kd, weight_ku]
keys = ['ku', 'kd', 'hhsm', 'tth+bbh', 'bbxaa']
filename = '../results/confusion/HL-LHC-BDT/hh-BDT-5class-ku-kd.confusion.json'
df = build_confusion(df_array, weight_array, classifier, filename, keys)
df
###Output
_____no_output_____ |
Wine Classification.ipynb | ###Markdown
Importing required libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn. metrics import confusion_matrix, classification_report, roc_curve, roc_auc_score
import joblib
###Output
_____no_output_____
###Markdown
Reading and Viewing Data
###Code
df = pd.read_csv("wine.csv")
df.head()
df.describe()
df.shape
###Output
_____no_output_____
###Markdown
Checking the NULL values
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Plotting the Boxplots
###Code
features = ["Alcohol", "Malic_acid","Ash","Alcalinity","Magnesium","Phenols","Flavanoids","Nonflavanoids","Proanthocyanins",
"Color_intensity","Hue","OD280_315_of_diluted_wines","Proline"]
label = "WineVariety"
for col in features:
df.boxplot(column=col, by=label, figsize=(6,6))
###Output
C:\Users\Razi\anaconda3\lib\site-packages\numpy\core\_asarray.py:102: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
return array(a, dtype, copy=False, order=order)
###Markdown
Getting the Categorical and Numeric features
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
Since all the columns are already numeric, there are no categorical features to encode; we'll work with the numeric columns directly. Splitting the data into Train and Test
###Code
X, y = df[features], df[label]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print("X_train:", X_train.shape)
print("X_test:", X_test.shape)
print("y_train:", y_train.shape)
print("y_test:", y_test.shape)
###Output
X_train: (124, 13)
X_test: (54, 13)
y_train: (124,)
y_test: (54,)
###Markdown
Creating a Pipeline for preprocessing the data and fitting the model
###Code
feature_transformer = Pipeline(steps=[("scaler", StandardScaler())])
preprocessor = ColumnTransformer(transformers=[("preprocess", feature_transformer, features)])
pipeline_log = Pipeline(steps=[("preprocessor", preprocessor),
("logRegression", LogisticRegression(solver='lbfgs', multi_class='auto'))])
model_log = pipeline_log.fit(X_train, y_train)
predictions_log = model_log.predict(X_test)
cl_report_log = classification_report(y_test, predictions_log)
print(cl_report_log)
###Output
precision recall f1-score support
0 1.00 1.00 1.00 21
1 1.00 1.00 1.00 20
2 1.00 1.00 1.00 13
accuracy 1.00 54
macro avg 1.00 1.00 1.00 54
weighted avg 1.00 1.00 1.00 54
###Markdown
Finding the AUC and ROC
###Code
probabilities_log = model_log.predict_proba(X_test)
auc = roc_auc_score(y_test, probabilities_log, multi_class='ovr')
print(auc)
###Output
1.0
###Markdown
The AUC looks promising, which means the Logistic Regression model performs pretty well. It is now time to test it against some unseen data. Dumping the model
###Code
#saving the model
filename = "./model_log.pkl"
joblib.dump(model_log, filename)
###Output
_____no_output_____
###Markdown
Loading the model
###Code
model = joblib.load(filename)
#new data
X_new = pd.DataFrame([[12.37,0.94,1.36,10.6,88,1.98,0.57,0.28,0.42,1.95,1.05,1.82,520]], columns=features)
predictions = model.predict(X_new)
print(predictions)
###Output
[1]
###Markdown
Logistic Regression We will try implementing a Logistic Regression model on the Wine dataset from the sklearn library by finding the most relevant features in the data. Importing dependencies Importing dependencies like numpy, pandas, sklearn and matplotlib that we are going to use below.
###Code
import numpy as np
import pandas as pd
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import plot_confusion_matrix
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Loading our dataset We load the Wine dataset and print its description to see what the dataset contains.
###Code
dataSrc = load_wine()
print(dataSrc.DESCR)
###Output
.. _wine_dataset:
Wine recognition dataset
------------------------
**Data Set Characteristics:**
:Number of Instances: 178 (50 in each of three classes)
:Number of Attributes: 13 numeric, predictive attributes and the class
:Attribute Information:
- Alcohol
- Malic acid
- Ash
- Alcalinity of ash
- Magnesium
- Total phenols
- Flavanoids
- Nonflavanoid phenols
- Proanthocyanins
- Color intensity
- Hue
- OD280/OD315 of diluted wines
- Proline
- class:
- class_0
- class_1
- class_2
:Summary Statistics:
============================= ==== ===== ======= =====
Min Max Mean SD
============================= ==== ===== ======= =====
Alcohol: 11.0 14.8 13.0 0.8
Malic Acid: 0.74 5.80 2.34 1.12
Ash: 1.36 3.23 2.36 0.27
Alcalinity of Ash: 10.6 30.0 19.5 3.3
Magnesium: 70.0 162.0 99.7 14.3
Total Phenols: 0.98 3.88 2.29 0.63
Flavanoids: 0.34 5.08 2.03 1.00
Nonflavanoid Phenols: 0.13 0.66 0.36 0.12
Proanthocyanins: 0.41 3.58 1.59 0.57
Colour Intensity: 1.3 13.0 5.1 2.3
Hue: 0.48 1.71 0.96 0.23
OD280/OD315 of diluted wines: 1.27 4.00 2.61 0.71
Proline: 278 1680 746 315
============================= ==== ===== ======= =====
:Missing Attribute Values: None
:Class Distribution: class_0 (59), class_1 (71), class_2 (48)
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%[email protected])
:Date: July, 1988
This is a copy of UCI ML Wine recognition datasets.
https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data
The data is the results of a chemical analysis of wines grown in the same
region in Italy by three different cultivators. There are thirteen different
measurements taken for different constituents found in the three types of
wine.
Original Owners:
Forina, M. et al, PARVUS -
An Extendible Package for Data Exploration, Classification and Correlation.
Institute of Pharmaceutical and Food Analysis and Technologies,
Via Brigata Salerno, 16147 Genoa, Italy.
Citation:
Lichman, M. (2013). UCI Machine Learning Repository
[https://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science.
.. topic:: References
(1) S. Aeberhard, D. Coomans and O. de Vel,
Comparison of Classifiers in High Dimensional Settings,
Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
(Also submitted to Technometrics).
The data was used with many others for comparing various
classifiers. The classes are separable, though only RDA
has achieved 100% correct classification.
(RDA : 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data))
(All results using the leave-one-out technique)
(2) S. Aeberhard, D. Coomans and O. de Vel,
"THE CLASSIFICATION PERFORMANCE OF RDA"
Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
(Also submitted to Journal of Chemometrics).
###Markdown
Converting the dataset to a DataFrame object We load our Wine dataset into a Pandas DataFrame object.
###Code
data = pd.DataFrame(data= np.c_[dataSrc['data'], dataSrc['target']], columns= dataSrc['feature_names'] + ['target'])
###Output
_____no_output_____
###Markdown
Describing our dataset We display the first 10 entries of the DataFrame object and a summary of the dataset.
###Code
data.head(10)
data.describe()
###Output
_____no_output_____
###Markdown
Finding the relation between the target and the features We plot graphs to see how the target varies with the different features.
###Code
depends_upon = dataSrc.feature_names
for i in depends_upon:
x = data[i]
y = data['target']
plt.xlabel(i)
plt.ylabel('target')
plt.scatter(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
Using heatmaps Graphs can give a fairly good picture of the relationship between the target and each feature, but a heatmap gives a more precise picture of the correlation between the different features and the target variable.
###Code
plt.figure(figsize=(14,14))
corr_matrix = data.corr().round(2)
sns.heatmap(data=corr_matrix, annot=True)
###Output
_____no_output_____
###Markdown
Conclusion from the graphs and heatmaps From both the graphs and the heatmap, we can infer that malic_acid, alcalinity_of_ash, nonflavanoid_phenols, od280/od315_of_diluted_wines, total_phenols, flavanoids, hue and proline could give the regression the most useful signal. We now shape the X and Y variables we are going to use.
###Code
x = data[['malic_acid', 'alcalinity_of_ash', 'nonflavanoid_phenols', 'od280/od315_of_diluted_wines', 'total_phenols', 'flavanoids', 'hue', 'proline']]
y = data[['target']]
###Output
_____no_output_____
###Markdown
Splitting the dataset We use train_test_split to split our dataset into training and testing sets.
###Code
X_train, x_test, Y_train, y_test = train_test_split(x, y, random_state=4, test_size=0.3)
###Output
_____no_output_____
###Markdown
We create a logistic regression model. Since the dimensionality of our dataset is fairly high, we set the maximum-iterations value to 2,250 (the default for sklearn's LogisticRegression being 100).
###Code
model = LogisticRegression(max_iter=2250)
###Output
_____no_output_____
###Markdown
We train the Logistic Regression model using the fit method.
###Code
model.fit(X_train, Y_train)
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py:760: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
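###Markdown
The warning above is harmless here, but it can be avoided by passing the labels as a 1-D array, as the message suggests. A minimal sketch:
###Code
# Equivalent fit call with a flattened label vector (silences the DataConversionWarning)
model.fit(X_train, Y_train.values.ravel())
###Output
_____no_output_____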
###Markdown
We make predictions for the X values in the testing dataset and store them in the y_hat variable. Now we can compare y_hat with y_test to assess the accuracy of our model.
###Code
y_hat = model.predict(x_test)
###Output
_____no_output_____
###Markdown
Calculating our model's efficiencyWe calculate the efficiency of our model in different ways. Using the built-in score method we compute the performance score (mean accuracy) of our model.
###Code
model.score(x_test, y_test)
###Output
_____no_output_____
###Markdown
We calculate the F1 Score of the model using sklearn's f1_score function.
###Code
score = f1_score(y_test, y_hat, average='weighted')
print("F1 Score of the model is", score)
###Output
F1 Score of the model is 0.9627995642701526
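###Markdown
The weighted average can hide weak classes; passing `average=None` to the same function returns one F1 score per class. A quick sketch:
###Code
from sklearn.metrics import f1_score
# One F1 score per wine class instead of a single weighted number
print("Per-class F1 scores:", f1_score(y_test, y_hat, average=None))
###Output
_____no_output_____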
###Markdown
Plotting a confusion matrixVisualising the predictions of our model in the form of a confusion matrix to see how well our model performs
###Code
plot_confusion_matrix(model, x_test, y_test)
###Output
_____no_output_____
###Markdown
Calculating TP, FP, TN, FNWriting a function to manually calculate the True Positives, False Positives, True Negatives and False Negatives.
###Code
def perf_measure(y_actual, y_hat):
TP = 0
FP = 0
TN = 0
FN = 0
for i in range(len(y_hat)):
if y_actual[i]==y_hat[i]==1:
TP += 1
if y_hat[i]==1 and y_actual[i]!=y_hat[i]:
FP += 1
if y_actual[i]==y_hat[i]==0:
TN += 1
if y_hat[i]==0 and y_actual[i]!=y_hat[i]:
FN += 1
return(TP, FP, TN, FN)
truePositive, falsePositive, trueNegative, falseNegative = perf_measure(np.asarray(y_test), np.asarray(y_hat))
print("Precision is", (truePositive / (truePositive + falsePositive)))
print("Recall is", (truePositive / (truePositive + falseNegative)))
print("Specificity is", (trueNegative / (trueNegative + falsePositive)))
print("Accuracy is", ((truePositive + trueNegative) / (truePositive + falsePositive + falseNegative + trueNegative)))
###Output
Precision is 0.9047619047619048
Recall is 1.0
Specificity is 0.8888888888888888
Accuracy is 0.9459459459459459
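###Markdown
Note that `perf_measure` above treats the labels as binary (class 1 as positive, class 0 as negative) and ignores class 2 entirely, so the printed metrics only describe those two classes. For a proper multi-class breakdown, the counts can be read off scikit-learn's confusion matrix; a sketch:
###Code
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_hat)
# For each class c: TP is the diagonal entry, FN the rest of its row, FP the rest of its column
for c in range(cm.shape[0]):
    tp = cm[c, c]
    fn = cm[c, :].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print("class", c, "- precision:", tp / (tp + fp), "recall:", tp / (tp + fn))
###Output
_____no_output_____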
###Markdown
Plotting the malic_acid versus target class graphThe grey dots highlight the data fed in the testing set for classifying on the basis of malic_acid. The red dots are what the model predicted.
###Code
plt.scatter(x_test['malic_acid'], y_test, color='grey')
plt.scatter(x_test['malic_acid'], y_hat, c='red')
plt.xlabel("malic_acid", fontsize=18)
plt.ylabel("Alcohol Type", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the alcalinity_of_ash versus target class graphThe grey dots highlight the data fed in the testing set for classifying on the basis of alcalinity_of_ash. The red dots are what the model predicted.
###Code
plt.scatter(x_test['alcalinity_of_ash'], y_test, color='grey')
plt.scatter(x_test['alcalinity_of_ash'], y_hat, c='red')
plt.xlabel("alcalinity_of_ash", fontsize=18)
plt.ylabel("Alcohol Type", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the total_phenols versus target class graphThe grey dots highlight the data fed in the testing set for classifying on the basis of total_phenols. The red dots are what the model predicted.
###Code
plt.scatter(x_test['total_phenols'], y_test, color='grey')
plt.scatter(x_test['total_phenols'], y_hat, c='red')
plt.xlabel("total_phenols", fontsize=18)
plt.ylabel("Alcohol Type", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the flavanoids versus target class graphThe grey dots highlight the data fed in the testing set for classifying on the basis of flavanoids. The red dots are what the model predicted.
###Code
plt.scatter(x_test['flavanoids'], y_test, color='grey')
plt.scatter(x_test['flavanoids'], y_hat, c='red')
plt.xlabel("flavanoids", fontsize=18)
plt.ylabel("Alcohol Type", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the nonflavanoid_phenols versus target class graphThe grey dots highlight the data fed in the testing set for classifying on the basis of nonflavanoid_phenols. The red dots are what the model predicted.
###Code
plt.scatter(x_test['nonflavanoid_phenols'], y_test, color='grey')
plt.scatter(x_test['nonflavanoid_phenols'], y_hat, c='red')
plt.xlabel("nonflavanoid_phenols", fontsize=18)
plt.ylabel("Alcohol Type", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the hue versus target class graphThe grey dots highlight the data fed in the testing set for classifying on the basis of hue. The red dots are what the model predicted.
###Code
plt.scatter(x_test['hue'], y_test, color='grey')
plt.scatter(x_test['hue'], y_hat, c='red')
plt.xlabel("hue", fontsize=18)
plt.ylabel("Alcohol Type", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the proline versus target class graphThe grey dots highlight the data fed in the testing set for classifying on the basis of proline. The red dots are what the model predicted.
###Code
plt.scatter(x_test['proline'], y_test, color='grey')
plt.scatter(x_test['proline'], y_hat, c='red')
plt.xlabel("proline", fontsize=18)
plt.ylabel("Alcohol Type", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the od280/od315_of_diluted_wines versus target class graphThe grey dots highlight the data fed in the testing set for classifying on the basis of od280/od315_of_diluted_wines. The red dots are what the model predicted.
###Code
plt.scatter(x_test['od280/od315_of_diluted_wines'], y_test, color='grey')
plt.scatter(x_test['od280/od315_of_diluted_wines'], y_hat, c='red')
plt.xlabel("od280/od315_of_diluted_wines", fontsize=18)
plt.ylabel("Alcohol Type", fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Classification ChallengeWine experts can identify wines from specific vineyards through smell and taste, but the factors that give different wines their individual characteristics are actually based on their chemical composition.In this challenge, you must train a classification model to analyze the chemical and visual features of wine samples and classify them based on their cultivar (grape variety).> **Citation**: The data used in this exercise was originally collected by Forina, M. et al.>> PARVUS - An Extendible Package for Data Exploration, Classification and Correlation. Institute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno, 16147 Genoa, Italy.>> It can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml). Irvine, CA: University of California, School of Information and Computer Science). Explore the dataRun the following cell to load a CSV file of wine data, which consists of 13 numeric features and a classification label with the following classes:- **0** (*variety A*)- **1** (*variety B*)- **2** (*variety C*)
###Code
import pandas as pd
# load the training dataset
data = pd.read_csv('data/wine.csv')
data.sample(10)
###Output
_____no_output_____
###Markdown
Your challenge is to explore the data and train a classification model that achieves an overall *Recall* metric of over 0.95 (95%).> **Note**: There is no single "correct" solution. A sample solution is provided in [03 - Wine Classification Solution.ipynb](03%20-%20Wine%20Classification%20Solution.ipynb). Train and evaluate a modelAdd markdown and code cells as required to explore the data, train a model, and evaluate the model's predictive performance.
###Code
# separate features and labels
features = ['Alcohol', 'Malic_acid', 'Ash', 'Alcalinity', 'Magnesium', 'Phenols',
'Flavanoids', 'Nonflavanoids', 'Proanthocyanins', 'Color_intensity',
'Hue', 'OD280_315_of_diluted_wines', 'Proline']
label = 'WineVariety'
X, y = data[features].values, data[label].values
for n in range(0,4):
    print('Sample', str(n+1), '\n features:', list(X[n]), '\n label:', y[n])
import matplotlib.pyplot as plt
%matplotlib inline
for col in features:
data.boxplot(column=col, by=label, figsize=(8,8))
plt.title(col)
plt.show()
from sklearn.model_selection import train_test_split
# split the data into train and test sets
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0, stratify=y)
print('Training set: %d \nTest set: %d' % (x_train.shape[0], x_test.shape[0]))
# training model
from sklearn.linear_model import LogisticRegression
# set regularization rate for avoiding overfitting
reg = 0.1
# train model
model = LogisticRegression(C=1/reg, solver='lbfgs', multi_class='auto', max_iter=1000).fit(x_train, y_train)
print(model)
# make prediction
prediction = model.predict(x_test)
print('Predicted labels: ', prediction[:15])
print('Actual labels: ', y_test[:15])
# look at metrics
from sklearn.metrics import classification_report
print(classification_report(y_test, prediction))
from sklearn.metrics import accuracy_score, precision_score, recall_score
print('Overall accuracy:', accuracy_score(y_test, prediction))
print('Overall precision', precision_score(y_test, prediction, average='macro'))
print('Overall recall:', recall_score(y_test, prediction, average='macro'))
###Output
Overall accuracy: 0.9074074074074074
Overall precision 0.9093567251461989
Overall recall: 0.9116402116402117
###Markdown
We got an overall recall of 0.9116402116402117, but the challenge requires more than 0.95 (95%). Let's preprocess the data.
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
feature_columns = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
feature_transformer = Pipeline(steps=[('scaler', StandardScaler())])
# create preprocessing steps
preprocessor = ColumnTransformer(
transformers=[('preprocess', feature_transformer, feature_columns)])
# create the training pipeline (the final step is the SVC classifier)
pipeline = Pipeline(steps=[('preprocessor', preprocessor), ('classifier', SVC(probability=True))])
# fit the pipeline on training set
model = pipeline.fit(x_train, y_train)
print(model)
prediction = model.predict(x_test)
print(classification_report(y_test, prediction))
print('Overall accuracy:', accuracy_score(y_test, prediction))
print('Overall precision', precision_score(y_test, prediction, average='macro'))
print('Overall recall:', recall_score(y_test, prediction, average='macro'))
###Output
Overall accuracy: 1.0
Overall precision 1.0
Overall recall: 1.0
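###Markdown
A perfect score on a single held-out split can be optimistic on a dataset this small. Cross-validating the same (unfitted) pipeline over the full data is a cheap sanity check; a sketch using standard scikit-learn utilities:
###Code
from sklearn.model_selection import cross_val_score
# 5-fold cross-validated macro recall for the scaling + SVC pipeline
scores = cross_val_score(pipeline, X, y, cv=5, scoring='recall_macro')
print('Recall per fold:', scores)
print('Mean recall:', scores.mean())
###Output
_____no_output_____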
###Markdown
Use the model with new data observationsWhen you're happy with your model's predictive performance, save it and then use it to predict classes for the following two new wine samples:- \[13.72,1.43,2.5,16.7,108,3.4,3.67,0.19,2.04,6.8,0.89,2.87,1285\]- \[12.37,0.94,1.36,10.6,88,1.98,0.57,0.28,0.42,1.95,1.05,1.82,520\]
###Code
import joblib
# save model in the file
file_name = 'wine_prediction.pkl'
joblib.dump(model, file_name)
import numpy as np
model_prod = joblib.load(file_name)
x_new = np.array([[13.72,1.43,2.5,16.7,108,3.4,3.67,0.19,2.04,6.8,0.89,2.87,1285]])
new_predict = model_prod.predict(x_new)[0]  # use the reloaded model, not the in-memory one
new_predict
x_new2 = np.array([[12.37,0.94,1.36,10.6,88,1.98,0.57,0.28,0.42,1.95,1.05,1.82,520]])
new_predict2 = model_prod.predict(x_new2)[0]
new_predict2
###Output
_____no_output_____ |
wiki_data_scrap_and_clean.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
# Link to the page whose data we need to scrape.
link = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
table = pd.read_html(link)
table
# Printing the table shows that the information we need is at position zero of this list
# Create DataFrame.
toronto_neighs_df = pd.DataFrame(table[0])
toronto_neighs_df.shape
# Let's remove the rows where Borough == "Not assigned"
# We use np.where() to locate them.
toronto_neighs_df.head(5)
toronto_neighs_df = toronto_neighs_df.drop(np.where(toronto_neighs_df.Borough == "Not assigned")[0])
# toronto_neighs_df.head(5)
toronto_neighs_df.shape
###Output
_____no_output_____ |
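###Markdown
After dropping the "Not assigned" rows the integer index keeps gaps; resetting it gives a clean 0..n-1 index for later processing. A small optional step:
###Code
# Rebuild a contiguous index after the row drops above
toronto_neighs_df.reset_index(drop=True, inplace=True)
toronto_neighs_df.head(5)
###Output
_____no_output_____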
code/calc_sticker_position_by_grid.ipynb | ###Markdown
Creates images where the sample stickers were placed on the stock images. Uses the previously calculated relevant grid cells. Creates large/medium/small sign versions for each sign.
###Code
import json
from collections import Counter
from pathlib import Path
from PIL import Image, ImageDraw, ImageFilter
# Sign to sticker ratio
sign_shapes = {
"circle": {
"bochum": {"large": "75x75.png", "medium": "93x93.png", "small": "133x133.png"},
"fortuna": {
"large": "95x197.png",
"medium": "119x247.png",
"small": "170x352.png",
},
},
"triangle": {
"bochum": {"large": "44x44.png", "medium": "62x62.png", "small": "89x89.png"},
"fortuna": {
"large": "57x117.png",
"medium": "79x164.png",
"small": "113x235.png",
},
},
"hexagon": {
"bochum": {"large": "53x53.png", "medium": "62x62.png", "small": "62x62.png"},
"fortuna": {
"large": "68x141.png",
"medium": "79x164.png",
"small": "79x164.png",
},
},
"square": {
"bochum": {"large": "67x67.png", "medium": "93x93.png", "small": "133x133.png"},
"fortuna": {
"large": "85x176.png",
"medium": "119x247.png",
"small": "170x352.png",
},
},
}
def get_cell_dict(path_csv: str) -> dict:
"""
Returns a dictionary with the most relevant cell for each class
path_csv : str
        Path to the result file (a JSON file produced by the masking process)
"""
with open(path_csv) as json_file:
data = json.load(json_file)
most_relevant_cell_dict = {}
for _class in range(43):
_class = str(_class).zfill(5)
most_relevant_cell_lst = []
for _, cell_lst in data[_class].items():
most_relevant_cell_lst.append(cell_lst[0])
most_relevant_cell_dict[_class] = Counter(most_relevant_cell_lst).most_common()[
0
][0]
return most_relevant_cell_dict
def get_x_y_coordinates(cell: int):
"""
Calculates coordinates of the passed cell
cell : int
"""
row = int(cell / 8)
col = cell % 8
x_coordinate = (col) * 100
y_coordinate = (row) * 100
return x_coordinate, y_coordinate
# Adjust the paths!
sticker_folder = Path(r"/Users/robin/Downloads/GTSRB_Visualization/data/raw_sticker")
stock_images = [
x
for x in Path(
r"/Users/robin/Downloads/GTSRB_Visualization/data/sticker/original"
).glob("**/*")
if x.is_file()
]
# Get dict with relevant cells
cell_dict = get_cell_dict(
"/Users/robin/Downloads/content-20/masking_jsons/heatmap_saliency__heatmap_masked.csv"
)
for size in ["large", "medium", "small"]:
for sticker_type in ["fortuna", "bochum"]:
for stock_image in stock_images:
if stock_image.name == ".DS_Store":
continue
# Get coordinates
x_coordinate, y_coordinate = get_x_y_coordinates(
cell_dict[stock_image.parent.name]
)
# Get correct sticker
sticker = sign_shapes[stock_image.stem.split("_")[0]][sticker_type][size]
sticker_path = sticker_folder.joinpath(sticker)
# Load images
im1 = Image.open(stock_image)
im2 = Image.open(sticker_path)
            # Shift the coordinates if the sticker would extend past the image edge
if im2.size[0] + x_coordinate > 799:
x_coordinate = x_coordinate - (im2.size[0] + x_coordinate - 799)
if im2.size[1] + y_coordinate > 799:
y_coordinate = y_coordinate - (im2.size[1] + y_coordinate - 799)
# Paste sticker and store image
im1.paste(im2, (x_coordinate, y_coordinate))
trg = sticker_folder.joinpath(
"generated_sticker",
sticker_type,
size + "_sign",
stock_image.parent.name,
stock_image.name,
)
trg.parent.mkdir(parents=True, exist_ok=True)
im1.save(str(trg))
###Output
_____no_output_____ |
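###Markdown
As a quick check of the coordinate math above: cell indices run row-major over an 8-column grid of 100x100-pixel cells, so cell 19 lies in row 2, column 3.
###Code
# Expected output: (300, 200) -- column 3 * 100 px, row 2 * 100 px
print(get_x_y_coordinates(19))
###Output
_____no_output_____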
numeric-types/ex 2.4 hexadecimal output.ipynb | ###Markdown
**Hexadecimal output:** Exercise 2.4Hexadecimal numbers are fairly common in the world of computers. Actually, that’s not entirely true: some programmers use them all of the time. Other programmers, typically using high-level languages and doing things such as Web development, barely ever remember how to use them.Now, the fact is that I barely use hexadecimal numbers in my day-to-day work. And even if I were to need them, I could use Python’s built-in hex function and 0x prefix. The former takes an integer and returns a hex string; the latter allows me to enter a number using hexadecimal notation, which can be more convenient. Thus, 0x50 is 80, and hex(80) will return the string 0x50.For this exercise, you need to write a program that takes a hex number and returns the decimal equivalent. That is, if the user enters 50, then we will assume that it is a hex number (equal to 0x50), and will print the value 80 on the screen.
###Code
conversion_table = {
'A': 10,
'B': 11,
'C': 12,
'D': 13,
'E': 14,
'F': 15
}
def convert_hex_char_to_dec_num(hex_char):
_hex_char = hex_char.upper()
if '0' <= _hex_char <= '9':
dec = int(_hex_char)
elif 'A' <= _hex_char <= 'F':
dec = conversion_table[_hex_char]
else:
raise ValueError("invalid hex char {0}".format(hex_char))
return dec
def hex_to_dec(hex_str):
result = None
for hex_char in hex_str:
        dec_num = convert_hex_char_to_dec_num(hex_char)
if result is None:
result = dec_num
else:
result = result * 16 + dec_num
return result
print(hex_to_dec('ff'))
[ord(_) for _ in '0123456789ABCDEFabcdef']
5 in range(4,10)
###Output
_____no_output_____ |
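###Markdown
As the exercise statement mentions, Python's built-in `int` with an explicit base performs the same conversion, which makes a handy cross-check for `hex_to_dec`:
###Code
# The built-in conversion agrees with our manual implementation
assert hex_to_dec('ff') == int('ff', 16) == 255
print(int('50', 16))  # 80, the example from the exercise statement
###Output
_____no_output_____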
Practicas/Practica2/P1_clustering.ipynb | ###Markdown
Practice 2: Machine learning__Due date: May 16, 2021__The goal of this practice is to apply the different machine learning algorithms available in scikit-learn [sklearn](https://scikit-learn.org/stable/) to several datasets and to learn how to interpret the results obtained. The practice consists of 3 notebooks that will be submitted together through the submission task enabled on the Campus Virtual.The most important part of this practice is not the Python code, but the analysis of the data and models you build and the reasoned explanation of every decision you make. __Snippets of code or plots without any context or explanation will not be graded__.Finally, remember to set the `random_state` parameter in every function that makes random decisions so that the results are reproducible (they do not change between runs). Section 1: Clustering __Group number: 17____Student names: David del Cerro Domínguez and Sergio Ramos Mesa__ 1) Loading the datasetCreate a dataframe from the `countries_of_the_world.csv` file provided with the practice. Use the country names as the index. We will remove the `Region` column because it is categorical, and every row with missing values using the `dropna` operation.Display the resulting dataframe and explain how many countries and variables it contains.
###Code
# Using the pandas library we create a dataframe from a csv file
# Then we sort it (note: sort_values returns a copy, so the sorted result below is not stored)
# We remove the unwanted column with del (DataFrame.drop could also be used)
# We drop every row with at least one null (NaN) value
# Display the result
import pandas as pd
import numpy as np
df1 = pd.read_csv('countries_of_the_world.csv')
df1.sort_values(by=['Country'])
del df1["Region"]
df1.dropna(inplace=True)
df1
###Output
_____no_output_____
###Markdown
As we can see in the data, we have a dataframe made up of 179 different countries with a total of 19 variables.- These variables are the following:- Country- Region- Population- Area (sq. mi.)- Pop. Density (per sq. mi.)- Coastline (coast/area ratio)- Net migration- Infant mortality (per 1000 births)- GDP ($ per capita)- Literacy (%)- Phones (per 1000)- Arable (%)- Crops(%)- Other (%)- Climate- Birthrate- Deathrate- Agriculture- Industry- Service 2) Data analysisIn this notebook we will work with a subset of the variables. Create a new dataframe that only contains the variables `GDP ($ per capita)`, `Literacy (%)`, `Phones (per 1000)`, `Agriculture`, `Industry` and `Service`. What do you think each of these variables represents?Analyze, with reasoning, the distributions of each of the variables (means, standard deviations, ranges, ...) and the main relationships between pairs of variables (scatter plot, correlation coefficients, ...).
###Code
# Create a new dataframe with the columns of interest, taken from df1, and display it
# This lets us visualize the data. Later we will look at the distributions and the correlation
# between variables
df2 = df1[["GDP ($ per capita)", "Literacy (%)", "Phones (per 1000)", "Agriculture", "Industry", "Service"]]
df2
###Output
_____no_output_____
###Markdown
The variables in this second dataframe describe the state of each country's economy. Taking them one by one:- GDP ($ per capita): gross domestic product per inhabitant- Literacy (%): shows the literacy level of the population- Phones (per 1000): telephones per 1000 inhabitants- Agriculture, Industry and Service: the contribution of each sector to the economy, measured over 1Once the dataframe is loaded, we move on to the analysis of the distributions and the values obtained.To see the distribution of the data we will use a scatter-plot matrix (seaborn)
###Code
import seaborn as sns
sns.pairplot(df2,diag_kind='kde')
###Output
_____no_output_____
###Markdown
As mentioned before, since we are working with 179 countries, that is the count over which the plots are drawn. We can see that Service and Industry are related to each other, judging by their scatter against the rest of the variables. We can also see that, specifically, they are inversely proportional if we look at their relationship in the plot. Literacy is strongly related to all the variables: the higher the literacy, the more likely the rest of the variables are to be high. Next, let's look at a more detailed description of the variables and their correlation. Table dataTo examine the correlation between variables in more detail we will rely on the description table (to know the exact individual values) and on the Pearson correlation coefficient between each pair of variables (to compare them)
###Code
df2.describe()
###Output
_____no_output_____
###Markdown
The conclusions drawn from the data are the following:For GDP ($ per capita) we have a mean of 9125 dollars and a standard deviation of 9644, which indicates a large spread in the data. This is also shown by the 25th percentile -> 1800 and the 75th percentile -> 12950, where the disparity is visible as well. This variable depends both on the literacy rate of the population, where we observe that a higher GDP comes with a higher literacy percentage, and, even more strongly, on the number of phones per 1000 people, where the dependence looks linear.For the literacy percentage the data are not very dispersed either, with most countries sitting around 90-100%. As mentioned above, it is directly related to GDP.In turn, for the number of phones per 1000 people we find the most diverse data, with most countries around 150-200 phones. This variable is strongly related to the country's GDP.All of this, as we said, can be checked against the plots shown above.
###Code
correl = df2.corr(method='pearson')
correl
###Output
_____no_output_____
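###Markdown
To read the table more easily, the same matrix can be drawn as a heatmap; a sketch using seaborn (already imported above as `sns`):
###Code
import matplotlib.pyplot as plt
# Heatmap of the Pearson correlation matrix computed above
plt.figure(figsize=(8, 6))
sns.heatmap(correl, annot=True, cmap='coolwarm', vmin=-1, vmax=1)
plt.show()
###Output
_____no_output_____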
###Markdown
As we can see in the correlation table between each pair of variables, the closer the value is to +/-1, the more tightly linked those two variables are. One example is the high correlation between the number of phones and GDP (0.88) and the low correlation between Industry and the number of phones (-0.08). We thus see that the values obtained are consistent with the plot and with the conclusions drawn from it. 3) Preprocessing the dataGiven that we are going to use the k-Means algorithm to find groups of similar countries, explain with reasoning whether or not it is necessary to change the scale of the data and whether, a priori, it is better to rescale it (MinMaxScaler) or to standardize it (StandardScaler).If you decide to preprocess it, access the dataframe's internal array and create a new array with the scaled data. We will normalize all the data and store this information in X2_scaled. For this we use the predefined cluster-plotting helper. We standardize with StandardScaler, which changes the distribution so the data are centered at 0 as in a normal distribution, since we do not want to represent all the data in a [0,1] range.
###Code
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
RANDOM_STATE = 16 #Random
def plot_clusters(X, labels=None, centers=None):
""" Función auxiliar para pintar los puntos de los clusters y, optativamente, sus centros.
:param X: array de puntos de dos dimensiones (array de array de enteros)
:param labels: cluster al que pertenece cada punto (array de enteros)
:param centers: coordenadas de los centroides de los clusters (array de array de enteros)
"""
colors = ['r','g','b','c','y','k','m',]
fig = plt.figure(figsize=(8,8))
    # The first two parameters of the scatter function are the point coordinates,
    # the 'c' parameter indicates the class assigned to each point, and 'cmap' is used
    # to color the different classes
plt.scatter(X[:,0], X[:,1], c=labels, cmap=ListedColormap(colors))
    # Plot the cluster centroids
if centers is not None:
plt.scatter(centers[:,0], centers[:,1], marker="x", color='k', s=150, linewidths = 5, zorder=10)
plt.show()
# Scale the data
# MinMaxScaler scales the data to the [0-1] interval without changing its distribution
# StandardScaler changes the distribution so the data are centered at 0 with variance 1 (like a normal).
from sklearn.preprocessing import StandardScaler
# In this case we decided to standardize the data. The difference is not significant
scaler = StandardScaler()
scaler.fit(df2)
X2_scaled = scaler.transform(df2) # Here we store the scaled data that we will use for the plots
# Display the scaled data
sca = pd.DataFrame(data=X2_scaled, columns=["GDP ($ per capita)", "Literacy (%)", "Phones (per 1000)", "Agriculture", "Industry", "Service"])
sca
###Output
_____no_output_____
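###Markdown
To go back from the standardized space to the original units (useful later when interpreting cluster centroids), the fitted scaler can undo the transformation. A minimal sketch:
###Code
# Recover the original values from the scaled array (up to floating-point rounding)
X2_restored = scaler.inverse_transform(X2_scaled)
print(X2_restored[0])  # should match the first row of df2
###Output
_____no_output_____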
###Markdown
4) Finding the optimal number of clustersDecide, with reasoning, the optimal number of clusters in the range 2..10. Keep in mind that, to interpret the data, we do not want an excessive number of clusters either. To do so, compute and plot the elbow diagram, the davies_bouldin index and the silhouette coefficient as a function of the number of clusters. Next we cluster the data using the KMeans algorithm provided in class. Once the data are grouped we extract the score for each of the measures. Let's also describe the measures:- __score__: sum of the distances from each point to the centroid of its cluster. sklearn returns this value negated, so we multiply it by -1. The closer to zero, the more compact the clusters.- __davies_bouldin__: ratio between intra-cluster distances and inter-cluster distances. That is, the index takes a smaller value the more compact and well separated the clusters are.- __silhouette__: mean distance to the nearest cluster minus the mean intra-cluster distance, divided by the larger of the two. The index takes a larger value the better the grouping.
###Code
from sklearn.metrics import davies_bouldin_score, silhouette_score
from sklearn.cluster import KMeans
K_MAX = 11 # As requested, we try numbers of clusters in [2-10]
# Initialize the arrays that will hold the measures
score = np.zeros(K_MAX-2) # This one corresponds to the elbow method
davies_boulding = np.zeros(K_MAX-2)
silhouette = np.zeros(K_MAX-2)
# Fill them by running KMeans on the scaled data. This is important
for k in range(2, K_MAX):
km = KMeans(init='random', n_clusters=k, random_state=RANDOM_STATE)
km.fit(X2_scaled)
labels = km.labels_
centers = km.cluster_centers_
    plot_clusters(X2_scaled, labels, centers) # Plot the clusters and their centroids
    # For the measures we will look at later
score[k-2] = -1 * km.score(X2_scaled)
davies_boulding[k-2] = davies_bouldin_score(X2_scaled, km.labels_)
silhouette[k-2] = silhouette_score(X2_scaled, km.labels_)
# Show the three plots
# score
plt.plot(range(2, K_MAX), score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow value for different k')
plt.show()
# davies boulding
plt.plot(range(2, K_MAX), davies_boulding)
plt.xlabel('Number of Clusters')
plt.ylabel('Davies Boulding value')
plt.title('Davies Bouldin value for different k')
plt.show()
# silhouette
plt.plot(range(2, K_MAX), silhouette)
plt.xlabel('Number of Clusters')
plt.ylabel('Silhouette value')
plt.title('Silhouette value for different k')
plt.show()
###Output
_____no_output_____
###Markdown
Let's now study the results: - The elbow diagram looks for a value of K at which the curve stops descending as quickly. In our case K = 4- The Davies Bouldin diagram looks for a minimum value. In our case K = 4- The Silhouette diagram looks for the largest value. K = 4 5) Describing the clustersDescribe the clusters you obtained in the previous section and try to identify the group of countries they contain. If you got more than 3, choose 3 that are fairly different from each other. To do so, study their statistical descriptors and plot the scatter diagram for each pair of variables, using different colors for each cluster. Which clusters separate best, and based on which variables? And which ones get confused the most?__Careful__: to interpret the data correctly you need them in their original scale. If you decided to scale the data, you must run kMeans with the scaled data but assign the cluster labels to the initial dataset. In this case it is very simple because the algorithm does not change the order of the data, so you can directly create a new column in the original dataframe with those labels. A SettingWithCopyWarning may appear for assigning a new column in a dataframe that is a view of another dataframe. You can ignore this warning, or make a copy of the dataframe with `copy` so it does not share memory with the other one.
###Code
# df2 holds the data in their original scale; we displayed them earlier
# Now let's look at the description of the different clusters. We will use the pandas library
# Create a dataframe with the original points and add the cluster column
# We show it transposed for a better visualization of the data
df3 = df2
df3 = df3.assign(cluster=km.labels_)  # assign positionally: df3's index has gaps after dropna, so wrapping the labels in a plain Series would misalign them
df3.groupby(["cluster"]).describe().transpose()
# As we can see, we get more than three clusters, so we choose 3 different ones
# Create a dataframe with only the clusters of interest. In our case 1, 5 and 9
df4 = df3.drop(df3[(df3.cluster != 1) & (df3.cluster != 5) & (df3.cluster != 9)].index)
# df4 already carries the cluster labels from df3, so no reassignment is needed
df4.groupby(["cluster"]).describe().transpose()
###Output
_____no_output_____
###Markdown
The table shows the specific values for each cluster number. From the number of elements (count) and the mean we can conclude that the best number of clusters for our example would be 5. Since the raw numbers are hard to read, we can plot, for each variable, the values for the different clusters. In our case we decided to show only the density plot for GDP, but it could be done separately for all the variables.
###Code
# Show the values of the different clusters for the GDP variable.
# This could be done for every variable in the table
# It does not add much to the explanation, but we can look for a relation between the data and the number of clusters obtained.
# As we can check in our GDP example, most of the values obtained are close to 5000 and, the more
# clusters, the lower the mean of the data.
df4.groupby(["cluster"])["GDP ($ per capita)"].plot.density()
###Output
_____no_output_____ |
assignments/assignment3/PyTorch_CNN.ipynb | ###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorchWe will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in the Google cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras, our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation sets.Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentationWhen working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network.It is important that the augmented data look like data that could occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network.PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transformsBelow we use the following generation algorithms:- ColorJitter - random color change- RandomHorizontalFlip - horizontal flip with 50% probability- RandomVerticalFlip - vertical flip with 50% probability- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model?Keep only the appropriate ones
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = None
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
LeNetLet's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN.It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionnary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet!Try to find an architecture and training settings that beat our baselines.What you can and should try:- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)- Changing the number of layers and their width- Changing the number of training epochs- Trying other augmentations as well
###Code
best_model = None
###Output
_____no_output_____
###Markdown
The final chord - checking the best model on the test setFor variety, this time you write the code that runs the model on the test set.As a result you should train a model that achieves more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorchWe will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in the Google cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras, our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation sets.Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
with torch.no_grad():
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
accuracy = float(correct_samples) / total_samples
return accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.376624, Train accuracy: 0.539177, Val accuracy: 0.774691
Average loss: 0.693164, Train accuracy: 0.789731, Val accuracy: 0.810115
Average loss: 0.583886, Train accuracy: 0.825922, Val accuracy: 0.804518
Average loss: 0.538610, Train accuracy: 0.838020, Val accuracy: 0.838714
Average loss: 0.503441, Train accuracy: 0.850084, Val accuracy: 0.834073
###Markdown
Data augmentationWhen working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network.It is important that the augmented data look like data that could occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network.PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transformsBelow we use the following generation algorithms:- ColorJitter - random color change- RandomHorizontalFlip - horizontal flip with 50% probability- RandomVerticalFlip - vertical flip with 50% probability- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1293: UserWarning: The parameter 'resample' is deprecated since 0.12 and will be removed 0.14. Please use 'interpolation' instead.
"The parameter 'resample' is deprecated since 0.12 and will be removed 0.14. "
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1293: UserWarning: The parameter 'resample' is deprecated since 0.12 and will be removed 0.14. Please use 'interpolation' instead.
"The parameter 'resample' is deprecated since 0.12 and will be removed 0.14. "
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model?Keep only the appropriate ones
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.600192, Train accuracy: 0.818005, Val accuracy: 0.828476
Average loss: 0.544960, Train accuracy: 0.834351, Val accuracy: 0.845130
Average loss: 0.523494, Train accuracy: 0.840716, Val accuracy: 0.802744
Average loss: 0.509929, Train accuracy: 0.843327, Val accuracy: 0.844994
Average loss: 0.493383, Train accuracy: 0.849776, Val accuracy: 0.862808
###Markdown
LeNetLet's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN.It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
batch_size = 256
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.Tanh(),
nn.MaxPool2d(2,stride=2),
nn.Conv2d(6, 16, 5, padding=0),
nn.Tanh(),
nn.MaxPool2d(2, stride=2),
Flattener(),
nn.Linear(5*5*16, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.266966, Train accuracy: 0.574702, Val accuracy: 0.808409
Average loss: 0.571068, Train accuracy: 0.824455, Val accuracy: 0.849840
Average loss: 0.484889, Train accuracy: 0.851415, Val accuracy: 0.863627
Average loss: 0.444710, Train accuracy: 0.863512, Val accuracy: 0.873046
Average loss: 0.417037, Train accuracy: 0.870781, Val accuracy: 0.877756
Average loss: 0.394382, Train accuracy: 0.880081, Val accuracy: 0.875845
Average loss: 0.373935, Train accuracy: 0.885217, Val accuracy: 0.886356
Average loss: 0.362573, Train accuracy: 0.888288, Val accuracy: 0.884513
Average loss: 0.348474, Train accuracy: 0.894038, Val accuracy: 0.888813
Average loss: 0.335057, Train accuracy: 0.897348, Val accuracy: 0.889700
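###Markdown
As a sanity check on the architecture, we can count the trainable parameters; this LeNet-like variant should come out at roughly 62k (a figure estimated from the layer sizes above):
###Code
# Total number of trainable parameters in the LeNet-like model
n_params = sum(p.numel() for p in lenet_model.parameters() if p.requires_grad)
print("Trainable parameters:", n_params)
###Output
_____no_output_____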
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionnary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
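###Markdown
The cell above leaves the actual search loop as a TODO. One possible random-search sketch that could fill `run_record` (it belongs in place of that TODO, before the best-run selection; the trial budget of 5 and the fresh-model helper are arbitrary choices, not part of the assignment):
###Code
import random
random.seed(42)

def make_lenet():
    # A fresh LeNet-like model for every trial, same layout as above
    return nn.Sequential(
        nn.Conv2d(3, 6, 5), nn.Tanh(), nn.MaxPool2d(2, stride=2),
        nn.Conv2d(6, 16, 5), nn.Tanh(), nn.MaxPool2d(2, stride=2),
        Flattener(),
        nn.Linear(5*5*16, 120), nn.Tanh(),
        nn.Linear(120, 84), nn.Tanh(),
        nn.Linear(84, 10),
    ).to(device)

for _ in range(5):
    params = Hyperparams(learning_rate=random.choice(learning_rates),
                         anneal_epochs=random.choice(anneal_epochs),
                         reg=random.choice(reg))
    model = make_lenet()
    optimizer = optim.SGD(model.parameters(), lr=params.learning_rate,
                          weight_decay=params.reg)
    # Anneal the learning rate by anneal_coeff every anneal_epochs epochs
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=params.anneal_epochs,
                                          gamma=anneal_coeff)
    train_hist, val_hist = [], []
    for epoch in range(epoch_num):
        _, th, vh = train_model(model, train_aug_loader, val_loader, loss, optimizer, 1)
        scheduler.step()
        train_hist += th
        val_hist += vh
    run_record[params] = RunResult(model, train_hist, val_hist, val_hist[-1])
###Output
_____no_output_____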
###Markdown
Free exercise - let's catch up with and overtake LeNet!Try to find an architecture and training settings that beat our baselines.What you can and should try:- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)- Changing the number of layers and their width- Changing the number of training epochs- Trying other augmentations as well
###Code
best_model = None
###Output
_____no_output_____
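###Markdown
A hedged sketch of one possible deeper network with BatchNorm for this free exercise - an untuned starting point, not a known-good final architecture:
###Code
# Three conv blocks with BatchNorm; the 32x32 input is halved by each MaxPool (32 -> 16 -> 8 -> 4)
best_model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
    Flattener(),
    nn.Linear(128*4*4, 256), nn.ReLU(inplace=True),
    nn.Linear(256, 10),
).to(device)
optimizer = optim.SGD(best_model.parameters(), lr=1e-1, weight_decay=1e-4)
loss_history, train_history, val_history = train_model(
    best_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____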
###Markdown
The final chord - checking the best model on the test setFor variety, this time you write the code that runs the model on the test set.As a result you should train a model that achieves more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
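###Markdown
A minimal sketch for the test-set check, reusing `compute_accuracy` and the `data_test` loaded at the top of the notebook (swap in `best_model` once it is trained):
###Code
# Evaluate on the held-out SVHN test split
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(lenet_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____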
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorchWe will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in the Google cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras, our notebook will install PyTorch itself)
###Code
from google.colab import drive
drive.mount('/content/drive/')
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN(root="/content/drive/My Drive/Colab Notebooks/", split='train',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN(root="/content/drive/My Drive/Colab Notebooks/", split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
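# the first 20% of the shuffled indices become validation, the rest training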
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
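        # merge all non-batch dimensions (C, H, W) into a single feature vector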
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
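    # 32x32 input -> 8x8 after the first MaxPool2d(4) -> 2x2 after the second,
    # hence 64 channels * 2 * 2 = 256 features going into the classifier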
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=1)
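# with gamma=1 this scheduler never actually changes the learning rate -
# it is a placeholder that keeps the train_model signature uniform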
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
scheduler.step()
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples_ = 0
total_samples_ = 0
for i_step_, (x_, y_) in enumerate(loader):
x_gpu_ = x_.to(device)
y_gpu_ = y_.to(device)
prediction_ = model(x_gpu_)
__, indices_ = torch.max(prediction_, 1)
correct_samples_ += torch.sum(indices_ == y_gpu_)
total_samples_ += y_gpu_.shape[0]
v_accuracy = float(correct_samples_) / total_samples_
return v_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 1, scheduler)
###Output
Average loss: 1.398067, Train accuracy: 0.533085, Val accuracy: 0.727527
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.
It is important that the augmented data resemble what can be encountered in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read about them in more detail here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
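    # `resample` is deprecated in newer torchvision in favor of `interpolation`,
    # which is what the warning in the output below is about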
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN("/content/drive/My Drive/Colab Notebooks/",
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1249: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN("/content/drive/My Drive/Colab Notebooks/",
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1249: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model?
Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
# transforms.RandomHorizontalFlip(),
# transforms.RandomVerticalFlip(),
transforms.RandomRotation(5, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN(root="/content/drive/My Drive/Colab Notebooks/", split='train',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 1, scheduler)
###Output
_____no_output_____
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch.
You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is not very clear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.BatchNorm2d(6, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.BatchNorm2d(16, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.BatchNorm1d(120, affine=False),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.BatchNorm1d(84, affine=False),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=0.105978, weight_decay=0.000245)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=1)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 1, scheduler)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are the learning rate, the annealing rate and regularization
# We also encourage you to try different optimizers
# Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
# RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
# learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
# anneal_coeff = 0.2
# anneal_epochs = [1, 5, 10, 15, 20, 50]
# reg = [1e-3, 1e-4, 1e-5, 1e-7]
# batch_size = 64
# epoch_num = 10
# # Record all the runs here
# # Key should be Hyperparams and values should be RunResult
# run_record = {}
# # Use grid search or random search and record all runs in the run_record dictionary
# # Important: perform search in logarithmic space!
# # TODO: Your code here!
def approx_val_accur_reg_strength_learning_rate(trainer_func, train_loader, val_loader, \
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor), num_epochs = 10, \
num_experiments = 100, reg_strength_10_exp = (-6, 0), learning_rate_10_exp = (-6, 0)):
"""
ONLY FOR: Pytorch TENSORS, CUDA platform, optim.SGD
trainer_func - should return loss_history, train_accur_history, val_accur_history BY EPOCHS
loss - for example nn.CrossEntropyLoss().type(torch.FloatTensor)
nmb_experiments, int >= 100
reg_strength_10_exp, tulip (-6, 0)
learning_rate_10_exp, tulip (-6, 0)
"""
loss_history = []
train_history = []
val_history = []
reg_strength_history = []
learning_rate_history = []
for count in range(num_experiments):
print('Count #: %d'% (count+1))
reg_strength = 10**np.random.uniform(reg_strength_10_exp[0],reg_strength_10_exp[1])
learning_rate = 10**np.random.uniform(learning_rate_10_exp[0],learning_rate_10_exp[1])
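        # uniform in the exponent => log-uniform in the sampled value itself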
print('reg_strength: %f'% (reg_strength))
print('learning_rate: %f'% (learning_rate))
model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.BatchNorm2d(6, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.BatchNorm2d(16, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.BatchNorm1d(120, affine=False),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.BatchNorm1d(84, affine=False),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
model.type(torch.cuda.FloatTensor)
model.to(device)
optimizer = optim.SGD(model.parameters(), lr=learning_rate, weight_decay=reg_strength)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=1)
loss_, train_, val_ = trainer_func(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler)
loss_history.append(loss_[-1])
train_history.append(train_[-1])
val_history.append(val_[-1])
reg_strength_history.append(reg_strength)
learning_rate_history.append(learning_rate)
approx_reg_strength = reg_strength_history[np.argmax(val_history)]
approx_learning_rate = learning_rate_history[np.argmax(val_history)]
approx_val_accur = np.max(val_history)
print('best validation accuracy achieved: %f at reg_strength %f and learning_rate %f' % (approx_val_accur, approx_reg_strength, approx_learning_rate))
return approx_val_accur, approx_reg_strength, approx_learning_rate
def final_val_accur_reg_strength_learning_rate(trainer_func, train_loader, val_loader, approx_reg_strength, approx_learning_rate, \
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor), num_epochs = 10):
reg_strength_list = np.array([0.8, 0.9, 1.0, 1.1, 1.2])*approx_reg_strength
learning_rate_list = np.array([0.8, 0.9, 1.0, 1.1, 1.2])*approx_learning_rate
loss_history = []
train_history = []
val_history = []
reg_strength_history = []
learning_rate_history = []
for reg_strength in reg_strength_list:
for learning_rate in learning_rate_list:
print('reg_strength: %f'% (reg_strength))
print('learning_rate: %f'% (learning_rate))
model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.BatchNorm2d(6, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.BatchNorm2d(16, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.BatchNorm1d(120, affine=False),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.BatchNorm1d(84, affine=False),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
model.type(torch.cuda.FloatTensor)
model.to(device)
optimizer = optim.SGD(model.parameters(), lr=learning_rate, weight_decay=reg_strength)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=1)
loss_, train_, val_ = trainer_func(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler)
loss_history.append(loss_[-1])
train_history.append(train_[-1])
val_history.append(val_[-1])
reg_strength_history.append(reg_strength)
learning_rate_history.append(learning_rate)
final_reg_strength = reg_strength_history[np.argmax(val_history)]
final_learning_rate = learning_rate_history[np.argmax(val_history)]
final_val_accur = np.max(val_history)
print('best validation accuracy achieved: %f at reg_strength %f and learning_rate %f' % (final_val_accur, final_reg_strength, final_learning_rate))
return final_val_accur, final_reg_strength, final_learning_rate
def final_coef_gamma(trainer_func, train_loader, val_loader, reg_strength, learning_rate, \
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor), \
gamma_list = np.array([0.3, 0.6, 0.95]), num_epochs = 30):
loss_history = []
train_history = []
val_history = []
for gamma in gamma_list:
print('coef_gamma: %f'% (gamma))
model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.BatchNorm2d(6, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.BatchNorm2d(16, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.BatchNorm1d(120, affine=False),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.BatchNorm1d(84, affine=False),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
model.type(torch.cuda.FloatTensor)
model.to(device)
optimizer = optim.SGD(model.parameters(), lr=learning_rate, weight_decay=reg_strength)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=gamma)
loss_, train_, val_ = trainer_func(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler)
loss_history.append(loss_[-1])
train_history.append(train_[-1])
val_history.append(val_[-1])
val_accur_best = np.max(val_history)
coef_gamma_best = gamma_list[np.argmax(val_history)]
print('best validation accuracy achieved: %f at coef_gamma %f' % (val_accur_best, coef_gamma_best) )
return val_accur_best, coef_gamma_best
approx_val_accur, approx_reg_strength, approx_learning_rate = approx_val_accur_reg_strength_learning_rate(train_model, train_aug_loader, val_loader, \
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor), num_epochs = 1, \
num_experiments = 1, reg_strength_10_exp = (-4, -3), learning_rate_10_exp = (-2, -1))
val_accur_best, reg_strength_best, learning_rate_best = final_val_accur_reg_strength_learning_rate(train_model, train_aug_loader, val_loader, approx_reg_strength, approx_learning_rate, \
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor), num_epochs = 1)
val_accur_best, coef_gamma_best = final_coef_gamma(train_model, train_aug_loader, val_loader, reg_strength=approx_reg_strength, learning_rate=approx_learning_rate, \
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor), \
gamma_list = np.array([0.1]), num_epochs =1)
# Train on the full augmented training set (all_data_aug_train)
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
# transforms.RandomHorizontalFlip(),
# transforms.RandomVerticalFlip(),
transforms.RandomRotation(5, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.4436, 0.4431, 0.4438],
std=[0.2035, 0.2036, 0.2037])
])
all_data_aug_train = dset.SVHN(root="/content/drive/My Drive/Colab Notebooks/", split='train',
transform=tfs
)
all_train_aug_loader = torch.utils.data.DataLoader(all_data_aug_train, batch_size=batch_size,
sampler=None, shuffle=True)
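# no subset sampler here: we train on the whole train split, so plain shuffling is enough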
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size,
sampler=None)
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.BatchNorm2d(6, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.BatchNorm2d(16, affine=False),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.BatchNorm1d(120, affine=False),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.BatchNorm1d(84, affine=False),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=0.085467, weight_decay=0.000148)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=coef_gamma_best)
loss_history, train_history, val_history = train_model(lenet_model, all_train_aug_loader, test_loader, loss, optimizer, 1, scheduler)
# Val accuracy is test accuracy!
plt.plot(train_history)
plt.plot(val_history)
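# with matplotlib's default color cycle: blue = train accuracy, orange = test accuracy per epoch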
# best_val_accuracy = None
# best_hyperparams = None
# best_run = None
# for hyperparams, run_result in run_record.items():
# if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
# best_val_accuracy = run_result.final_val_accuracy
# best_hyperparams = hyperparams
# best_run = run_result
# print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
What you can and should try:
- BatchNormalization (for convolution layers PyTorch calls it [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d))
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations
###Code
best_model = None
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set
For a change, you get to write the code that runs the model on the test set yourself.
As a result you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
test_accuracy = compute_accuracy(lenet_model, test_loader)
print("Test accuracy: %2.4f" % test_accuracy)
###Output
Test accuracy: 0.9039
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/
Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.353196, Train accuracy: 0.550694, Val accuracy: 0.741588
Average loss: 0.698353, Train accuracy: 0.787121, Val accuracy: 0.798512
Average loss: 0.598032, Train accuracy: 0.819336, Val accuracy: 0.747594
Average loss: 0.549606, Train accuracy: 0.836092, Val accuracy: 0.824108
Average loss: 0.517944, Train accuracy: 0.846961, Val accuracy: 0.825336
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.
It is important that the augmented data resemble what can be encountered in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read about them in more detail here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1201: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1201: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model?
Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 2.003870, Train accuracy: 0.292717, Val accuracy: 0.469866
Average loss: 1.731122, Train accuracy: 0.379125, Val accuracy: 0.395058
Average loss: 1.639030, Train accuracy: 0.417432, Val accuracy: 0.504880
Average loss: 1.575667, Train accuracy: 0.440040, Val accuracy: 0.532319
Average loss: 1.533279, Train accuracy: 0.457649, Val accuracy: 0.528633
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch.
You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is not very clear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, stride=2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, stride=2),
nn.Conv2d(16, 120, 5),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 2.077533, Train accuracy: 0.252739, Val accuracy: 0.346393
Average loss: 1.498067, Train accuracy: 0.478023, Val accuracy: 0.541465
Average loss: 1.245576, Train accuracy: 0.573900, Val accuracy: 0.599345
Average loss: 1.126555, Train accuracy: 0.614596, Val accuracy: 0.644188
Average loss: 1.051715, Train accuracy: 0.639218, Val accuracy: 0.670193
Average loss: 1.003250, Train accuracy: 0.659386, Val accuracy: 0.657566
Average loss: 0.974027, Train accuracy: 0.666451, Val accuracy: 0.663982
Average loss: 0.946438, Train accuracy: 0.675067, Val accuracy: 0.700020
Average loss: 0.929544, Train accuracy: 0.683309, Val accuracy: 0.681933
Average loss: 0.914256, Train accuracy: 0.686619, Val accuracy: 0.691079
###Markdown
Hyperparameter tuning
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, step_size):
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=0.5)
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
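        # NOTE: since PyTorch 1.1 the recommended order is optimizer.step()
        # first, then scheduler.step(); stepping here first triggers the
        # UserWarning visible in the output further below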
scheduler.step()
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
# The key hyperparameters we're going to tune are the learning rate, the annealing rate and regularization
# We also encourage you to try different optimizers
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
import random
params_grid = [learning_rates, anneal_epochs, reg]
n_results = 10
for i in range(n_results):
params = [random.sample(param, 1)[0] for param in params_grid]
print(f"lr={params[0]}, anneal_epochs={params[1]}, reg={params[2]}")
optimizer = optim.SGD(lenet_model.parameters(), lr=params[0], weight_decay=params[2])
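    # note that lenet_model itself is reused (not re-initialized) between runs,
    # so each run continues training the weights left over from the previous one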
_, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, params[1])
run_record[Hyperparams(*params)] = RunResult(lenet_model, train_history, val_history, val_history[-1])
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.07, best hyperparams: Hyperparams(learning_rate=1.0, anneal_epochs=50, reg=1e-07)
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
What you can and should try:
- BatchNormalization (for convolution layers PyTorch calls it [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d))
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 32, 3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(inplace=True),
nn.Dropout(0.25),
nn.Conv2d(32, 64, 3, stride=2, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Dropout(0.25),
nn.Conv2d(64, 128, 5, stride=2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Dropout(0.25),
nn.Conv2d(128, 128, 5, stride=2, padding=2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Dropout(0.25),
nn.Conv2d(128, 128, 5, stride=2, padding=2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(512, 64),
nn.BatchNorm1d(64),
nn.ReLU(inplace=True),
nn.Dropout(0.25),
nn.Linear(64, 10)
)
for module in best_model.modules():  # iterate modules, not parameters, so isinstance works
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(module.weight, gain=nn.init.calculate_gain('relu'))
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(best_model.parameters(), lr=1e-3, weight_decay=1e-6)
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 20, 5)
###Output
/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
###Markdown
The final chord - let's check the best model on the test set
For a change, you get to write the code that runs the model on the test set yourself.
As a result you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/
Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
        if scheduler is not None:
            scheduler.step()
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.401596, Train accuracy: 0.530714, Val accuracy: 0.734694
Average loss: 0.700787, Train accuracy: 0.786950, Val accuracy: 0.794280
Average loss: 0.601716, Train accuracy: 0.818909, Val accuracy: 0.832571
Average loss: 0.554854, Train accuracy: 0.833515, Val accuracy: 0.827657
Average loss: 0.517393, Train accuracy: 0.846688, Val accuracy: 0.839260
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.
It is important that the augmented data resemble what can be encountered in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read about them in more detail here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1249: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, interpolation=transforms.InterpolationMode.BILINEAR),
])
data_aug_vis = dset.SVHN('./', transform=tfs)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model?
Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(15, interpolation=transforms.InterpolationMode.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.665538, Train accuracy: 0.794509, Val accuracy: 0.840693
Average loss: 0.607229, Train accuracy: 0.813654, Val accuracy: 0.844652
Average loss: 0.580849, Train accuracy: 0.822049, Val accuracy: 0.844174
Average loss: 0.560207, Train accuracy: 0.828942, Val accuracy: 0.842332
Average loss: 0.541970, Train accuracy: 0.834147, Val accuracy: 0.860214
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch.
You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is not very clear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(16, 120, 5),
nn.LeakyReLU(inplace=True),
Flattener(),
nn.Linear(120, 84),
nn.LeakyReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.435073, Train accuracy: 0.509470, Val accuracy: 0.796464
Average loss: 0.599031, Train accuracy: 0.818739, Val accuracy: 0.844994
Average loss: 0.506266, Train accuracy: 0.847081, Val accuracy: 0.861375
Average loss: 0.452622, Train accuracy: 0.861891, Val accuracy: 0.877210
Average loss: 0.418361, Train accuracy: 0.876037, Val accuracy: 0.879599
Average loss: 0.398200, Train accuracy: 0.879773, Val accuracy: 0.876527
Average loss: 0.379105, Train accuracy: 0.886530, Val accuracy: 0.858986
Average loss: 0.360091, Train accuracy: 0.892366, Val accuracy: 0.891611
Average loss: 0.347409, Train accuracy: 0.896222, Val accuracy: 0.896048
Average loss: 0.336130, Train accuracy: 0.898850, Val accuracy: 0.896935
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are the learning rate, the annealing rate and regularization
# We also encourage you to try different optimizers
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2]
anneal_coeff = 0.2
anneal_epochs = [2, 4]
regs = [1e-4, 1e-5]
batch_size = 64
epoch_num = 6
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
for lr in learning_rates:
for anneal_epoch in anneal_epochs:
for reg in regs:
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
nn.Conv2d(6, 16, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
Flattener(),
nn.Linear(16 * 5 * 5, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
params = Hyperparams(lr, anneal_epoch, reg)
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
result = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[params] = result
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.88, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=2, reg=1e-05)
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
What you can and should try:
- BatchNormalization (for convolution layers PyTorch calls it [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d))
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations
###Code
best_model=None
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/
Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
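# scratch check of batched matmul broadcasting: (9,5,7,4) @ (1,5,4,3) -> (9,5,7,3)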
a = np.ones([9, 5, 7, 4])
c = np.ones([1,5, 4, 3])
np.matmul(a,c).shape
c.T.shape
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
    # Move each batch to the device, run the model, and count argmax matches
    # (this mirrors the completed versions of this function earlier in the file)
    correct_samples = 0
    total_samples = 0
    for x, y in loader:
        x_gpu = x.to(device)
        y_gpu = y.to(device)
        prediction = model(x_gpu)
        _, indices = torch.max(prediction, 1)
        correct_samples += torch.sum(indices == y_gpu)
        total_samples += y.shape[0]
    return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.
It is important that the augmented data resemble what can be encountered in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read about them in more detail here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, it is always very useful to look at the generated data).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = None
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
LeNet Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to re-implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
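###Markdown
For orientation, here is a minimal LeNet-style sketch, not a reference solution: it maps the classic 6-16-120-84 LeNet-5 sizes onto `nn.Conv2d`, pooling and `nn.Linear` layers for 32x32 RGB inputs, reusing the `Flattener` defined above. `Tanh` activations and average pooling follow the original paper; swapping in ReLU and max pooling is a common modernization.
###Code
# A LeNet-5-style sketch for 32x32 RGB SVHN images (sizes assumed from the classic layout)
lenet_sketch = nn.Sequential(
    nn.Conv2d(3, 6, 5),        # 32x32 -> 28x28, 6 feature maps
    nn.Tanh(),
    nn.AvgPool2d(2),           # 28x28 -> 14x14
    nn.Conv2d(6, 16, 5),       # 14x14 -> 10x10, 16 feature maps
    nn.Tanh(),
    nn.AvgPool2d(2),           # 10x10 -> 5x5
    Flattener(),
    nn.Linear(16 * 5 * 5, 120),
    nn.Tanh(),
    nn.Linear(120, 84),
    nn.Tanh(),
    nn.Linear(84, 10),
)
###Output
_____no_output_____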
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are the learning rate, the annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
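###Markdown
One possible way to fill in the search loop above is random search over the log-spaced grids already defined. The sketch below reuses `Hyperparams`, `RunResult`, `train_model` and the loaders from earlier cells, and assumes `compute_accuracy` above has been filled in; the number of trials and the seed are arbitrary choices, and the sampled `anneal_epochs` value is recorded but not wired in, since this version of `train_model` does not take a scheduler.
###Code
import random

# A random-search sketch over the grids above (one possible way to fill in the TODO)
random.seed(42)  # arbitrary seed, only for reproducibility of the sampled configs
for trial in range(10):  # number of trials is an arbitrary choice
    params = Hyperparams(learning_rate=random.choice(learning_rates),
                         anneal_epochs=random.choice(anneal_epochs),
                         reg=random.choice(reg))
    model = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(4),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(4),
        Flattener(),
        nn.Linear(64 * 2 * 2, 10),
    ).to(device)
    optimizer = optim.SGD(model.parameters(), lr=params.learning_rate,
                          weight_decay=params.reg)
    _, train_hist, val_hist = train_model(model, train_loader, val_loader,
                                          loss, optimizer, epoch_num)
    run_record[params] = RunResult(model, train_hist, val_hist, val_hist[-1])
###Output
_____no_output_____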
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. What you can and should try (a sketch follows the cell below): - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch) - Changing the number of layers and their width - Changing the number of training epochs - Trying other augmentations as well
###Code
best_model = None
###Output
_____no_output_____
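###Markdown
As a starting point for the free exercise, here is a hedged sketch of a somewhat deeper network with `nn.BatchNorm2d` after each convolution. The channel counts and depth are illustrative guesses, not a tuned baseline.
###Code
# A deeper model with BatchNorm2d (channel counts are illustrative, not tuned)
best_model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),                  # 32x32 -> 16x16
    nn.Conv2d(32, 64, 3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),                  # 16x16 -> 8x8
    nn.Conv2d(64, 128, 3, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),                  # 8x8 -> 4x4
    Flattener(),
    nn.Linear(128 * 4 * 4, 10),
).to(device)
###Output
_____no_output_____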
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set yourself. As a result, you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
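###Markdown
A minimal sketch for this step, assuming `compute_accuracy` and `best_model` have been filled in and trained above: the test split (`data_test` from the loading cell) gets a plain `DataLoader` with no sampler and no augmentations.
###Code
# Evaluate the trained model on the test split (no sampler, no augmentations)
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____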
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch We will be doing this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras; our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
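###Markdown
The normalization constants above are hard-coded; here is a quick sketch of how such per-channel statistics could be estimated from the raw pixel array (torchvision's SVHN stores `data` as a uint8 N x C x H x W array).
###Code
# Estimate per-channel mean/std of the training images, scaled to the 0..1 range
raw = data_train.data  # uint8 array of shape (N, 3, 32, 32)
print("mean:", raw.mean(axis=(0, 2, 3)) / 255.0)  # roughly [0.44, 0.44, 0.47]
print("std: ", raw.std(axis=(0, 2, 3)) / 255.0)   # roughly [0.20, 0.20, 0.20]
###Output
_____no_output_____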
###Markdown
Splitting the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network using the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
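    # SVHN images are 32x32; two MaxPool2d(4) layers shrink them to 8x8 and then 2x2, hence 64 * 2 * 2 inputs below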
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        if scheduler is not None:  # added relative to the previous assignment: step the LR scheduler once per epoch
            scheduler.step()
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
    correct_samples = 0  # number of correct predictions
    total_samples = 0  # total number of samples seen
    for x, y in loader:
        x_gpu = x.to(device)  # move the batch to the GPU so the model can process it
        y_gpu = y.to(device)
        prediction = model(x_gpu)  # forward pass
        _, indices = torch.max(prediction, 1)  # index of the highest score = predicted class
        correct_samples += torch.sum(indices == y_gpu)  # count predictions that match the labels
        total_samples += y_gpu.shape[0]  # add this batch's sample count
    val_accuracy = float(correct_samples) / total_samples  # accuracy
return val_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.462515, Train accuracy: 0.861294, Val accuracy: 0.849976
Average loss: 0.462503, Train accuracy: 0.861294, Val accuracy: 0.849976
Average loss: 0.462495, Train accuracy: 0.861294, Val accuracy: 0.849976
Average loss: 0.462588, Train accuracy: 0.861294, Val accuracy: 0.849976
Average loss: 0.462571, Train accuracy: 0.861294, Val accuracy: 0.849976
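###Markdown
Since this version of `train_model` accepts an optional `scheduler`, here is a short usage sketch with step-based learning-rate annealing; the choice of `StepLR` and its `step_size`/`gamma` values are illustrative, and any `torch.optim.lr_scheduler` would work.
###Code
# Hook a StepLR scheduler into train_model (step_size/gamma values are illustrative)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.2)
loss_history, train_history, val_history = train_model(
    nn_model, train_loader, val_loader, loss, optimizer, 5, scheduler=scheduler)
###Output
_____no_output_____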
###Markdown
Data augmentation When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks and can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color changes - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
    transforms.ColorJitter(hue=.50, saturation=.50),  # random color changes
    transforms.RandomHorizontalFlip(),  # horizontal flip
    transforms.RandomVerticalFlip(),  # vertical flip
    transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),  # random rotation
    transforms.ToTensor(),  # numpy images to torch images (moves the channel axis first)
    transforms.Normalize(mean=[0.43,0.44,0.47],
                         std=[0.20,0.20,0.20])  # rescales values by subtracting the mean and dividing by the std
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1231: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, it is always very useful to look at the generated data).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1231: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# first, let's look at some examples from the original dataset
data_train_orig = dset.SVHN('./')
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_train_orig):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
# now run the data augmentation
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
# transforms.RandomHorizontalFlip(),
# transforms.RandomVerticalFlip(),
transforms.RandomRotation(15, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
# transform the entire dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
# now compare what the data looked like before and after augmentation
data_train_orig = dset.SVHN('./')
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_train_orig):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_train):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x.permute(1, 2, 0))
plt.axis('off')
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
With the input data unchanged: Train accuracy: 0.885814, Val accuracy: 0.732646. Color jitter only: Train accuracy: 0.891649, Val accuracy: 0.706232. I decided not to add the flips - house-number digits are unlikely to appear flipped. Color jitter plus rotations from -15 to +15 degrees: Train accuracy: 0.861806, Val accuracy: 0.541601. Color jitter, rotations from -15 to +15 degrees, and normalization: Train accuracy: 0.839846, Val accuracy: 0.841854
###Code
# Finally, let's train with augmentations!
# split into batches
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.676655, Train accuracy: 0.791250, Val accuracy: 0.836393
Average loss: 0.608167, Train accuracy: 0.813756, Val accuracy: 0.835643
Average loss: 0.581922, Train accuracy: 0.822612, Val accuracy: 0.826633
Average loss: 0.566538, Train accuracy: 0.825615, Val accuracy: 0.842878
Average loss: 0.545835, Train accuracy: 0.832560, Val accuracy: 0.849976
###Markdown
LeNet Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to re-implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is not clear enough, you can simply google LeNet and work out the details :) [Figure: LeNet.PNG - diagram of the LeNet architecture]
kkPxCRzRGISDMQkcECERl+4igiGZaRuv3ZqEBGNiZ8HY8TUZSRr7zyilw7AEAcgIisUbjJtU7oqXAtw7m5Oe28YSfIgDqcqKxvXAMRCRGJJD8Qkc0RiEgzEJHBAhEZPHxd/PznP191jh07Jr6n5SYK4g0yMnwgI6sHMhIAUA0QkTVIkNqQPIiNbt5GhfuB1K2nPVu3btXOiwQLRCREJJL8QEQ2R1hYRBWIyOgFIjJ4Kn2+qFWi8t2GjAwfyMjqgYwEAFQKRGQNEqQ2JNdC1M3byHBTcd362jM2NqadF/EPRCREJJL8QEQ2RyAizUBEBgtEZPBARBaAjAyfKMpIvj7HiajJyEceeQQyEoAYABFZZeJYG1KF5ahufe2JqkSNQyAiISKR5CfuIpI/78tf/jLiE7723rx5s+z43QdqgU5EshwslW9+85va88EvYYtI/jz+3sQtX/rSl+h73/te2VlfX5dHtXFARDaWqIkxyMjGBDKyuvCPQSz2AQDRBSKyyvjVhuS+GKMs8rjGo2697eHBbXTzIqUDEQkRiSQ/cReRvE665SG1CculepPkGpHc959ueUnN1atX5VFtHBCRjQcyMnyiKCP5uxgnICMBAOUAEVlFgvSzODIyop03SuG+IHXrbg8Grik/EJEQkUjyAxGJlApEpBmIyGCJgoislO9+97vabfILRKQeyMjwiaKMjHK3IDogIwEAQYGIrCJxrw2pwpJRt/72YOCa8gMRCRGJJD8QkUipQESagYgMFojI8BNl0QMZGT5RlJFf+MIX5NrFA8hIAEAQICIrTFJqQ6pw82vdNtiDgWvKC0QkRCSS/EBEIqUCEWkGIjJYICLDT9RrnEFGhg9kZPVARgIA/ICIrDBJqQ2pwuvKA9PotkWF39fNi+gDEQkRiSQ/EJFIqYQhInl00HLD9yfd+eAXiMj6BiIy/MSh6StkZPhARlYPZCQAoBQQkRUkyEjZcaw9iIFrahuISIhIJPmBiERKJQwRWQksI3Xng18gIusbiMjwE5c++CAjwwcysnqiJiM5fFwBAI0HIrKC7NmzRyvpVOJWG9KeIAPXnDp1SjsvUhyISIhIJPmBiERKBSLSDERksEBEhp84DQYCGRk+3PWFbtsbGcjI6gMZCUDjgYgsM0EGdolzX4pBto+bpevmRYoDEQkRiSQ/EJFIqUBEmoGIDBaIyPATJxHJQEaGTxRl5OzsrFy7eAAZCQBwAhFZZvwGdYlzbUgVvxqfHB6sRzcvUghEJEQkkvxARCKlAhFpBiIyWCAiw0/cRCQDGRk+kJHVE0UZubq6KtcOABA2EJG2sFwrJRG5SbLfgC4f/vCHtfPGKbwPdNtmDwau8Q9EJEQkkvxARCKlAhFpBiIyWCAiw08cRSQDGRk+UZWRfD2PC1GUkc1w7gIQRSAiZezyjWs1cq1Aloo8MI2a5oMf/GCRjHOG5Vzca0OqjIyMaLfRniRI13oGIhIiEkl+ICKRUoGINAMRGSwQkcHD114+r6rNN7/5TfrOd75TdtbX1+WWNw7IyPCBjKweyEgAAAMRKeM3Eva73/1uetWrXqV9TyVpYo6FrG47VVi8YuAa70BEQkQiyQ9EJFIqEJFmWPjolucXiMj4ELaI5Och3fLCSlS+25CR4RNFGfnII49ARlYZyEgAwgUiUiZIDcBSSVJtSBU/OcvhmqO6eRGISA5EJJL0QEQipQIRaQYiMlggIoMHIrIAZGT4RFGksYx8+eWX5RpGnyjuQ5bMAIBwgIiUCTJAi1+4BiEPZsNSk0ef1n1O3BJkvyRlW2sdiEiISCT5aVYRyZ/Hkggpnf/23/6b6IOu3LBYqCdJF5H8vXQeiziEn6d054NfolB4hohsLFGrpQcZ2Zg8/PDD9P3vf1+uYfSBjASgeYGIlPFrhlxpVF+TfgPhRDVBBq7ZunWrdt5mD0QkRKQ9H/rQh7Tfn6AZHh7WLhdpbJpVRLI00S0PqU3qXYhPuogM8l1LUr7whS/II9s4bt68KcoE5WZqakq7TX6BiHQDGRk+kJHVE8V9yOsEAKgvEJFG+BdoXcG/HvEaCCfK8RukhzM2Nqadt5kDEQkRaY8Ske985zupp6cncN7ylreI+SAioxmISKQegYg0AxEZLFEQkZXCA23otskvEJF6ICPDBzKyeiAjAWg+ICKNsESzS7Www2KSZR/Xmozi4C9cKzLIwDW6eZs5EJEQkfYoEcn/6t6355Of/KQQlr/5m79Jb3vb28R8mUyGPvaxj2mnRxoXiEikHoGINAMRGSwQkeEnqiKSgYwMH8jI6oniPjx//rxcOwBArYGINMKijWUky8B3vetdRYKtUYlarckgsjZpo4ZXG4hIiEh7yhGRLBxf//rXu75jKvwe15RkScmyknPgwAGxbMjKcAMRidQjEJFmICKDBSIy/ERZRDKQkeHD0kq37Y0MZGT14X54AQC1ByLSI1w7cf/+/VoJ0IhEYSAc7gtSt24qXCsyijU6GxWISIhIe8oRkfbs3btXzHffffdZ0pFrS7KIDCorVe1KyMraByISqUeiWmjP5XKucwEisvGBiAw/UReRTNSkDmRkY8Iyst4DoNWSKMrIOF9jAYgqEJElEmTE6EYm7IFwgvSlyeukm7cZAxEJEWlPpSKS5SHPV6qPSBaLvNzf+Z3fEdMHlZUcfl8nK8tdz2YNRCRSj0BEmoGIDBaIyPATBxHJQEaGTxRlJAcysrpARgJQWyAiPRJEuv3bf/tvRQ1FbtLtV1swjIQxEE4QOduoGptRC0QkRKQ99RSRQcPCshayksPL4W1p9tqVEJFIPQIRaQYiMlggIsNPXEQkEzWpw83Gkw5kZPVARgKQbCAiPcLNoHUFchUWj7r5uHYii8Ao9TVZy4FwuOal7nPs8do3zRaISIhIe6IgIoPEKSvVyN1oCq4PRCRSj0BEmoGIDBaIyPATJxHJQEaGD2Rk9UBGApBcICI1YWHH/R3qCtsqXBNSN68zvKykDYTD8+qWaw+LT928zRSISIhIe+IiIoPE3hRc1ZAMUrvyNa95jai5nSRZCRGJ1CMQkWYgIoMFIjL8xE1EMlETY5CRjUucmshDRgKQTCAiNfETbSwpq+mTkSUdi0wWgn7CM4yUOxAObzvPo1uWCm+Xbt5mCkQkRKQ9SRKRQWKXlap2ZaVNwZWs5Hzyk5/Ufl6jAhGJ1CMQkWYgIoMFIjL8xFFEMpCR4cOjLuu2vdGBjKwufO0CAFQORKQjQSQbi0rdvJWGPzOKTbp5fXi9dNKVa3nq5rOHp3HO10yBiISItKfZRGTQOJuCK1npdx12yspG1a6EiETqEYhIMxCRwQIRGTz8PFOLcKsiXvdys7q6Kre8cUBGhg9kZPVARgKQLCAiHfETbFzTrxZ9LfqFH3CiNBCOTkYGqRWpm69ZAhEJEWkPRGTlccpKDstKv9qV/B5PU8+m4BCRSD0CEWkGIjJYICKDR7esMBMV6QYZGT6QkdUTVRn5yiuvyDUEAAQFItIRP+nHTZh184WRRtWa9Bp8hmWpbnp7Grm/Gh2ISIhIeyAi6xt7U3AlK
4M0Ba+230qISKQegYg0AxEZLBCRwaNbVpiJknCLmhiDjGxc4rTvISMBSAYQkbaw6NMVVO0J0odiWAlrIBxevu7zOdyEWzePPWHUII1iICIhIu1RIpJFGfdzGDQQkbUL70/ej7Xst/I//+f/HPgaBxGJBE2cRGSQxEVEPvHEE2Jd45alpSX6zne+U3bu3r0rj2zjgIhsLJCR4QMZWT1RlJGPPPIIZCQAZQARaYufVOP3dfNFKfUYCKdUX49cANfNY08c9ls9AhEJEWmPEpGVBiIyvDibgitZyeEalLrjw/nZn/1Zeve73y2ueVx7ncPXT649ztdKiEgkaCAizYQtIr/85S9rl5fUXL9+XR7ZxsFShmVkufnmN7+p3Sa/6I57mImi7IGMDJ+oykh+bo8LUZWRL7/8slxDAEApICJluKajrmBpD0s+3bxRTi0GwvGr7cPNr3Xz2ROlmqRhBSISItIero33/ve/v+Lw/LrlIo2JkpW/+Iu/qL3meeWnf/qnRVPwzs5ObVNwiEiEAxFpBiKyvomCiKyUSssfuuMeZqIq2SAjwwcysnqiKCMffvhh+v73vy/XEADgBUSkDIs6XaFRxaufxDimnIFwuFalbhn2sOzUzWtPkvZf0EBEQkRWGq79qGrgcXNg1SSY09PTI+SVCktKFlkcFmMstFR4OSy3OBCZ9Ql/p/gayD+28I8+XAOS7yd8fe3q6hLXPr9m37q0tbXRr/3ar4ljbK9Z6fWjDkRksgIRaQYisr6BiAw/URZskJHhE1UZyYMZxQXISADiCUSkES5E+jVjZnGnmzcp8ao1yX/rpnfGT+Ry4lijtJpAREJEVhoWhywhueYch0UWp1Sz4HLDy1LL5c+oVnyy9GxG8VnOd+qxxx4T+8je7Hv79u30S7/0S/SGN7xB1JbUHSuv8H3rV37lV+g973kP/ct/+S/F8tTx4GOgW19nICKjmaiKSO7/ip/7yk2lI/RCRNY3EJHhJ+pyDTIyfCAjqwcyEoD4ARFpxE+icWGPZaVu3qSGm2NzDZyg8pD3D8sM3f5TCVK7MkmBiISIrGdY9HGU+GPBpSQUiy4luzgsD5VIZLHIUaKRpaOSkE7pWSvxqZZVrvhUzdK9xKddejZKfJbznQoiR7hpNl9/nTUseX/88i//Mm3evNn3hzN71H7nJuRq//K+VLISIjKaiaqIrJRbt25pt9MvEJH1DURk+ImDWIOMDJ+oysg4jcgPGQlAvICINOIn0LgQqJsPKQ4XmHX7zx6eRjdvEgMRCRGZpHiJTyU9S4lPu/RspPhUn19KfNqlZynxqfZLPUSkbl5dWFhyc20Wlh/96EfFtmzZskXUrlT7VLd/nHnVq14l9tGb3/xm0Ryc+/399//+31uD7Og+GyKyvoGINAMRWd9ARIafuEg1yMjwgYysHshIAOJD04tILsDpCmYqXPPEb7AWpJBa9DmZlEBEQkQilccpPZ3i0y49S4lPJT114lN3jaokP/mTP2nJz9e97nUiLANZ7HH4uvj2t7+dfvVXf1UMUsP/cniE7fe+973U29tLH/jAB+j+++8XP3xls1lRI5LvPZwgNfJL9RHJ+1Dtu71794p9w8Ly53/+50V+6qd+SrtdzvB0ant27Ngh1p3XmZfdqBqpSQ1EpBmIyPoGIjL8xEmoRU2M8TNl0vnKV76i3fZGBzKyurCM5PsgAKBA04tIv5Gk9+zZo50P0Ydr0Oj2oz3NUsMUIhIiEolPdLU9vcSnkp4clnI/+7M/K35k4T4eWUpydNe+WuXHf/zH6dWvfrUQn62trWJgG5ar/C+vCwtXzpve9CZ661vfatUC5fsdy0NOX18fZTIZUZOSt59rYn7mM58R/x8fH6ehoSH6rd/6LVFDkpfFcvUnfuIntOtjD9eu5PXgeVj+snDlz2JhyfvWvs8R70BEmoGIrG8gIsNP3Gr2RU2MNYOMZOmn2/ZGBzKy+kBGAlCgqUUk1zjRFaTsYbGmmxfxDstb3b5UaZZaphCREJFI8rO6uiq/0XpefvllEX74PH78uCU5f/d3f1fUguRwjUiWg1xDcteuXUIAqr+59iLLRA7XRuSaliwblfxkIclhOcnRXXOrDS+XBSN/DteKZCnJ/3L4tVQqJeQri0qe9sd+7Me0y1HhaXgeXn+ukcn9XvKAOyx5P/KRj2j3c7MFItIMRGR9AxEZfuLYxDhqYgwysnGBjKw+kJEAmDS1iPzgBz+oLSSpcE0X3XxI6bBk1O1Pe7gPMt28SQpEJEQkkvz4iUg7QR6Ky+kj0h5n02yu4alqeR45ckTUROdaiZz77rtP1FJk2fkv/sW/oP3799P73vc+S352dXUJ8amatP/CL/yCEJ9c01EJSCUe/aRjNbFvTy3y2te+VghVrknKElTVJG1EuPYs72OWryynnesKEWkGIrK+gYgMP3Ht6xAyMnyiKiNnZ2flGkYfyEgAokvTikjuc4sLIrrCj0ozDaxS67Bo1O1Te5JeKxIiEiISSX6iKiKDhoWNbnl+YUGkWx7LTxafHK7d+Hu/93uiCTgLNxadHK79+Ou//uu0fft2IeNU352qhqdq3q5bfjXh+w6LSPt9iP9mQck1M7nGqWrGXs/w9trXgcNylGu/8n5ieQwRaQYisr6BiAwW7t+Nr9+1CK8Dy8hy4/dMGQaQkeEDGVk9fJ7otqHRSdp9HoByaVoRybVDnAUBe7hAFGSAAEQf3ne6/WpP0vvfhIiEiESSH4jI+ITvOzyYEf+f+/3kPj65xidLQPu9iV9TUlBXW7HacNN8/hxeB/4/fw5/pn0dWI7ye5/97GdF0/64AxEZzUBEBguLSN2ywswPfvADueWNBTIyfKIsI1955RW5ltEGMhKA6NG0IlJXI8GeZhlQpZ4JUiuS++nUzZuEQERCRCLJD0RkfML3HCUineFaiHY56bxXcU1Gfo+nqXaEcLuI1L3HYrK7u7vo87nGaJzFJERkNAMRGSwQkcVARoYPZGT1QEYCEC2aUkR++tOfLnrAd4ZrQzbDYCr1TpBakUnuhxMiEiISSX4gIuMTvud4iUhdlBTkeZw/XnItSiUneTrd/F4pJSJVVMHkq1/9qpCP/Dn2z1di8tixY7EQk0kXkbyeLKvilj//8z+nRx55pOxwU+FGAxHZWCAjwwcysnqiKiOjcE0FIGyaUkT6jeqc9CbDYcavCTyHxbBu3rgHIhIiEkl+ICLjE77flCMidbHXmnQ26WZZGaRJdzki0omXmOT1ibKYTLqI5O+tbnlJTRSkz82bN+m73/1u2dFtj18gIvVARoZPVGUk/0ABGVldICNBs9F0IpKbAtsf3nV59NFHtfMi5YdrRfo1g+f3dfPGPRCREJFI8gMRGZ/w/aZaEekMN9O2y0nn/Y1rTfJn8jSqSXc1ItIJi8mJiQmtmOTm5vx+FMQkRGSyEmfho9sev+hE5NNPP+16rZ6JoohkICPDJ6oykr8n3//+9+VaRhvISAAaT9OJSL8aekluKtyo8Ojjun1tTxJHKIeIhIhEkh+IyPiE7zW1FpG6sGg8cOCAZ5NuHp2b
/9/X16edn1Npn1FRFZMQkckKRCREpB3IyPCBjKyeqMrIZjh/AWCaSkRy7Tzu/9H+cO5MEoVYFOJXK5KPi26+OAciEiISSX4gIuMTvteEISJ14dqP3GSba0i++tWvLrr/cX+Pqkk3S0yevlad10dFTHLBlPsRKzdf+cpXtOefXyAi65s4F5R12+MXiMjScJNc/r7q9l2j0gwyBzKyeoI8lzUizXD+AtBUInJkZKToQdwZlmEsK3XzItXFb4AgTtIkMEQkRCSS/EBExid8n2mUiLRHNc1+73vf69mku7e31xKFXJuwVniJSf47Sk25Fevr69rzzy8QkfVNnAvJuu3xC0SkP5CRjSFq+1wFMrL6nD9/Xq4hAMmkqUQkN7u2P3g7w822dfMhtYnf/k9arUiISIhIJPmBiIxP+D4TJRHp7COSRaBq0r1t27ai+yOH32OJWMvCdRAx2UggIqOZOAse3fb4BSIyGJCR4RPFfa4CGVl9FhcX5RoCkDyaRkQGqZF36tQp7bxIbcKDAOn2uz1JqhUJEQkRiSQ/5YhI7oScR5YsFW5qpftO+AUi0j98j4myiLRHNc1mEcijZLOEdNac5L95Wfx+rQrbUROTEJHRDEQkRKQXkJHhAxlZG6IqI/m5EIAk0jQics+ePUUP1c7w+7r5kNqmmWpFQkRCRCLJTzkiMghc2NR9J/wCEekfvsfETUQ64WbTXrKQw8KSheGxY8dq0qS70WISIjKagYiEiCxFFMVY0pu5RllGcmrZxUg9gYwEIDyaQkRyTUe/QWq4xqRuXqS2mZub0+5/e5LSRB4iEiISSX4gIuMTvr/EXUTq4AIei0cWg15yUjXprrb/Rz8xWWtBBREZzUBEQkT6ARkZPpCRtQEyEoBwaAoRyWLL/sDsDNfS082H1Cd+tVNZGiehmTxEJEQkkvxARMYnfH9JoojUwcLQr0m3kpPVUG8xCREZzUBEBhORvJ9qlf/zf/6PEE3lppHw50NGhksU97k9kJHVBTISJInEi0geBfuNb3xj0QOyMxikJtywZNQdB3uScEwgIiEikeQHIjI+4XtLs4hIHV7SkGNv0l3NZ3t9hlo+3xPKASIymin3OEYJ3fb4pVIRee7cOe3ywkyj++eDjAyfqMvIWj831QvISADqS+JFpN8gNVz7jmWlbl6kftm3b5/2eKgkoVYkRCREJJL8QETGJ3xvaWYR6SSMJt3VikmIyGgGIhIishwgI8Mn6jIyjHtcLYiqjORjC0DcSbyIfNe73lX08OsMCzHdfEh9w/JXdzzsifuxgYiEiESSH4jI+ITvKxCRpVFNunkd3/nOdxbdk7lJt11OVoISk85ley0XIjKagYiEiCwXyMjwgYysDZCRANSHRIvIIAOjPProo9p5kfon6bUiISIhIpHkByIyPuH7CkRk+dhrNb7+9a8vuk/za7w9LC8rWWc/MVnpPSsuIpJFFd8n4xbe5suXL5cdFsuNRncc/AIRWRsgI8MHMrI28HVPt/6NDh9bPsYAxJFEi0i/QWq4tqRuPiScJL1WJEQkRCSS/EBExid8T4GIrB5u0u3V5JqjJCJPU26Tbp6Hm2w7xSQPcsfPdEF/PI6TiNQtL6m5e/euPNKNg8VTufn617/u2haIyMqAjAyfqMtI/pEiDkBGAlBbEisiWXJxjTr7g6wz3H+kbl4kvCS5ViREJEQkkvxARMYnfE+BiKwPfK3nWpE6ici1KCtt0s19WH7wgx+krVu3Fi3TT0xCREYzURCRlcCFfOe2QERWDmRk+EBG1oaoyshHHnkEMhLEjsSKyJGRkaKHVmcwSE00kuRakRCREJFI8gMRGZ/w/QQiMjxq0aTb2Uck/4AcRExCREYzEJHhJ4oikuF9yk3edevcqEBGNjbl/lDVKKIsI6P6fQdAR2JFpPMh1Rl+YNXNh4Qfv1qRnDjWioSIhIhEkp9ai8hcLkff/e53y87Zs2e16+cXiMjw0ywi0kmQJt1co9LepLvU9+G///f/Tp/73Ofo93//9+ltb3tb0bLe/e53W7JTt3+9AhFZ30BEhp8oiwn+nkNGhgtkZG2Iqozk7xNkJIgLiRSR/Iu5/YFUlzgPgpK0JLVWJEQkRCSS/NRaRFYKiyvd+vkFIjL8NKuI1MH3CG5+XYsm3Sw1VB+Tb33rW4uWxaIyiJiEiKxvICLDT9SlBGRk+EBG1gbISACqI5Eikpvo2B9AneH3dfMhjUsSa0VCREJEIskPRGR8wvcRiMjoo2pNsoR0Nul+y1veIvYfN+kuVVjl9z75yU+KZfAxf+Mb31i0HC8xCRFZ30BEhp84CAnIyPCJuoyMy/6HjASgchInIjFITTyTxFqREJEQkUjyAxEZn/B9BCIyfqiajkGadHPzb0ZXOLSLyde85jVFy1Bi8k/+5E+057tfICKDBSIy/MRFRkRRRn7hC1+Qa5dMoi4j47L/ISMBqIzEiUju+9H+cOkM9x2pmw9pfLgDet0xU4nbCNoQkRCRSPIDERmf8H0EIjIZ8P4p1aT7ve99r29T7FJi0m9UbmcgIoMFIjL8xElE8LpCRoYLn9s8yIlu26MQyMjqAhkJokziRKSz+Y0zGKSmNuEajLXOwsKC9pjZ83u/93vaeTm69WxkICIhIpHkp1oRyd+RWgAR6R++h0BEJheuEclNtlkuOlvGsJxkYfn+97/fU06ymBweHhY/ijrn9xOTEJHBAhEZfuImISAjwyeKtVHtgYysPqq1AABRIlEi0m+QmrjVqItyggwIFHZYQnNhgZtwc4GB13Fubk67/mEEIhIiEkl+qhWRfO2qBRCR/uF9DRHZHPC9h8Ui72OuGcl9S6pnBRVujs3nA0/DNSt5v9v7iOTnxbGxMfFM4SUm1TMGRGSwJElEBkmlIvJP//RPhRiqRe7cuSPWv9w0EsjI8IGMrA2QkQAEJ1Eikh8M7Q+KzvD7uvmQ8qNEJD/cd3R00Ote9zrX/lbh9375l3+Z3v72t9O73/1u+u3f/m2RkZERVx544AHtMuzh42if513vepd4nQsL6v/OOCUl12qot6SEiISIRJIfiMj4hPc1RGRz4FUYZOGommPr5GR3d7dnzcdSYnLHjh1CeHKNSt3negUiMh6wmNNtj18qFZEstnXLCzOQke5ARjY2kJHVBzISRInEiEiWSvaHQl2C9vWD+EeJSOc+5Qd1fo8f1vlhXsnBUk3mnZLwPe95j3Y6FS4A2D9T9S1pf81rPZzL4ugkZS1qzkJEQkQiyQ9EZHzC+xoisjkopyDIx4GbbHMNyXQ6bT0bcPj5wP5sYP8O8HMC/xjKzw8/9VM/VTQfL4uX6ScmISLjAURkY2AZqds/jQxkZGMDGVl9otK3OQCJEZH8kGh/CHQGg9TUNl4i0i+ViEpduK9ItUydiCwV5zpwIaIekhIiEiISSX4gIuMT3tcQkc1BpYVA1TSbnxH4nq97NuBnAn7u4GcIe9Nsrm2phKZuVG6dmKxURPL2xTE3b94UgqvcNBpeB91x8AtEZPVARoZP1GUkj/QdB/iap1v/KATPGCAKJEJE8kAlfiKLHxh
18yKVpVIR6RclCX/t137NdQy98oY3vEH8y4UGJQsrXS+npORCCEts52dy/CQlRCREJJL8QETGJ7yvISKbg0oLgPY+Iu3hezs/F7CA1MlJ1TSbjy33TcnLCiImp6entZ/nF+5D0LnuSU6ja/BARDYWyMjwiYOMjNI56gVkJADeJEJEKinmFW7KG8VRleOceolIFX7odx5HZ37rt35LSEAlIr3ChQYWhnZRaReGQVOupNy/f78olExMTIgbkbNfju9973tl58UXX9TeTIIGIhJBahuIyPiE9zVEZHNQaxGpCz8PqKbZP/dzP2fd/zk8SjfLRiUnedn8r05M8jMqL4OXFbTvaojIcPnRj36kfSbzy9WrV7Xb4xcvEckjddcjus+KmuSBjAyfKPbTaQ9kZPXBswZoJIkQkbpfp+1haaSbD6k89RaRHH4wdx5Le/i483T2ptlOWRik6XctRKVOUr75zW/Wfh53kM+d5ZeSlF7wL5S6G0nQQEQiSG0DERmf8L7evn27qKnWyLCA5HWBiKwfYYhIe1gc8fx8TL1G6ebXWISrJtr8DMAD9DmfdYKISYjIeFCpPAu7RqRORkZR8EBGhg9kZG2Isoy8fPmyXEsAwiX2IjLIIDVBf2FGgicMEcnLdh5LZ1gAltNHZLWikgsGQUWlaprNkvGrX/0qHTt2TIhHr0IKR0lKLqzqJCVEJIJEKxCR8YnzehuVcO04rkHH13+uLffOd75T9IPM9wC+b3D4Wuq8HwBvKi30VSsineF7Od/Tve77//yf/3PrB9BsNmvVsLRPw2KSn1XsYhIiMh5ARNYevgbq9lkjAxnZ2DzyyCOQkVUGMhI0gtiLSH44sz+wOcMPdLr5kOoShojk+PX9yRKyHBHpl3JFJRcQvESlXx+RTLmSkpt7qxoVXJuCCzm6G4pXICIRpLaJiojkgsL58+fLzvLysvY76Jc4iki+frLo43R2dopa6z//8z8v0tbWRj/90z/tuu6Wyqte9SoxD8/7i7/4iyK8TF42fwbXvvz1X/91es973kPvfe97qbe3l/bu3SvWg//P13K+5rN45Os7h4Wksz9Bryh5ycuw17J3CkyW1M0mMHl7dee7Xyq9Z3mJSF241iTfw/m4//Iv/3LRMeVnDfvzhJeY5PNK1azUfUbSAhFZ38RJRDKQkeETBxnJ6xh1oiwjed0ACJNYi0ju95EfyOwPaM6wVNLNi1SXsEQkC0HnMbWHj38tRaRfyhWV9kKiKhzyhT5IoVAnKb2ae6v+qPwkJUQkgtQ2URGRlcL9mOm+g36Jo4gsJzzgCF9DVRNalkd8HefrK1+LlUDk624lElGF5+H7B8/Py+Jlcv7dv/t39J/+03+iP//zPxf//pf/8l+sv//oj/5IrBuvl1oX9fm6z3BGrS/PpxOYfN/ha3czCkwe1Vl3vvulHBFpjxo1m58r+JmCnyecz7X8Gj9r/PZv/7Y4D/lv+/t8zvE5w+emGignaYGIrG/iJiIZyMjwibqM5HWDjKwukJEgTGItIvmhzf4w5gw/3GOQmvokLBEZRDa/+93vFv/q5g87dlHJBQJVUORCn3O9OaowqAqCfqJSNc3mQqgqGPsVQpWk5Ok+8pGPWDU2devvFYhIBNEHIhLxCl+nlTBU12t1zeYo6ajEIF+rg8pEFZ5e3UfU8n/3d39XXOv/4A/+QPw7NDRk/c1Nvu+//35RQ9Pv3uEMT8vrqz4niQKzUSLSGfUsoeSk81i8/e1vF8eC33P+QJlEMQkRWd/EUUQykJHhU+k5HVYgI6sPZCQIi1iLSK8Ri1X4AU43H1J9whKRHFXj0SsdHR3iX928jYyuaTY/NNlrOQYRlfYCHz+c8v+5cKu7eXCUpFQ1d0oVNJ3NwLwkJUQkgugDEYnUIyzz+Icnvmfw//m+oe4dn/3sZ8X9QElOvkcosVmOWOQokcnzspzkpuN87/jABz5A//pf/2u67777xL/8N7/O03HNe6/a+bqo5cdBYEZFROrC92e+T/MzEXcDYN/HLB/f9KY30Vvf+lZXCw0lJnnfx1VMQkTWN3EVkQxkZPhARlYO39fVPZ1/HOSKInxt1m1HI8NdlQBQb2IrIpUI8wrXoiu31hcSPGGKSK4V6Ty+uujmbWSC9BHppFxRyYVI1ZyPp1OFPC9RyaPL8bFThRmuTeHVrNwuKVls+glQXSAikaQHIhKpR7igUi12kcnXYnVvUfcXdY/h+wbfP9S9xut+4xWWkizB+D7E/WFy/4XcUuE3fuM3xGv8f3Wf8rrf6OInMHmbOLyNtRKYURaR9qjBavjezMeN96+zW4D29nbxQ+3P/uzPFr3Ozw1qv8ZFTEJE1jdxFpEMZGT4QEaWxi4c+Z7F11u+7jp/KGRfwT/u6bah2vA+4HD/mVz+43PyK1/5isji4qLVLzIPVMNRPwZyeN/xNgBQT2IrIp2ddzvD7+vmQ2qTMEUkJ0jhRTdfI1OJiPRjZWWlqEk239S4sOZVA8YpKv/wD//Qs9ajagoWRFKq5fIyVQFRJykhIpGkByISqUdqISJrgSqUKJFp/6GMo2pl8r3Afk9y3jNKhQtib3jDG4TQ5PsKh2v2veMd7xDL4vsQDyikm1eXcgSms6AVNxHpDItFJSd1x4H34+te97qi1/h+zgVhni+qYhIisr6Ju4hk+Pus25eNDGRkYxOGjAwqHPlvvi7zPZNbNfA86v7D9yO1vtWIQyUP4/bdBc1LLEVkkH4DwxJkzZqwRaTfoDUc3XyNTD1EJN9gnDdae/gGpxOVuv3FUbUeuSN8rvnI+1knKj/1qU9ZhRsusPAynTdZFbuk5MEVvMRnqei2DUGiGIhIpB6JioisFr5ncSGJt0fJTJaBqnk5R1cr0+v+4hWensP3NG62zP9yTcCgy+Hp+HP5/qbuieq+aL83qvuZs//xqIhIXdT+5W1z/rj4kz/5k67R4nlfRE1MQkQGyze+8Q362te+Vnb+9m//1vXa//pf/8uSHOWkkfDn6/ZnIwMZ2fjU4ryshXAEABQT2xqR/CDINbd0QpL7jtTNg9QuYYvIIM2zdfM1Mo0QkX5RTbOdo37r9idHicp/9s/+mbix8o3XWftRyU9+X4lPr8IffxYvr5T05NjXGUGiHIhIpB5JioisBUpw2EWmvVYm35M4dpHJcd5/SuXVr361EHJcW7C1tdXVzNkrPC3fJ/m5U7U+YInH68F9aqp7prpvOsVeGCJSF75n8/rxOju3leWk/e8oiEmIyGD50pe+pF1emGk0kJHhkyQZCeEIQHjEVkTaw2LFLlNYcuimQ2qXsEUkx/lLvjPOGgqNThRF5N/8zd9o11WFpaBdVPL3qpSo5BuxKnzxDdkpKo8fPy6WZ5eeXsfRLinVcqLaTAxBVCAikXoEIrJ28H2TC6GqcOmslcn3K77f8H2HhRtLRb5P+bW8sefHf/zH6Sd+4idEdO87w+KTpWdXV5dVA5O7ROH7H//I7lcDs1oR6Qzfa9UPin4Sl/sDDVtMQkQGC0SkCWRk+ERxnztjl5EQjgA0nkSISJW5uTnxMBc1IZXENE
JE+o2ezcdfN1+jEkcR6RU1ajbfiLnwwTdsVWApVWhRNSpVEzd7LUglPYNISv4M1dwbkhKJUiAikXoEIrIx6PqI5GdKvl/xMwbfv9SPdSwM+f7FYTHH9yh1T+TCq7MvxlqEay9yk3Nufr5lyxbRj2apGpi6cytoeH4lZ0v9EMzvKTGpW04tAhEZLBCRBSAjwyfKMpKvZ3yd3Lt3L4QjABEhUSISCS9RFJFhrkuQJFFE+mV4eFgURrhAxDd0vtmXqlHpJSo/8pGPWLUzeBmqYKdbBr/nrJGpWzcEqUcgIpF6BCKyMdRrsBou3HL4/sT3NvWD3u/93u+Jex8/36gf5LiWJA/c43XPqyTc3JolJg9WwzUaeTCg7u5uIRD3799v3Tv5Hs7rqdsGjrq/s3TlZum6z2JRyoKUp9Uto5JARAYLRGQxkJHho/Y5X0uiVGGA14WvT3wd5GsfhCMAjQciEqkojRCRpYQWh2ss6OZrVOohInkkNL5pVhoeZU23rn4JKiKd4T697MvhGiV87qiaJHxMSx1XLog5m35zIegDH/gAJCUSiUBEIvUIX69B+ER11Gx7rUy+h7LAVPdDvr+9973vFfKPBSaPQM7Ckfu8dPb3WG64CTmH77G8TDWyOd9X+TP583ft2iUK9j/3cz+nXQbXDO3s7BT3bd0+CJK4ishKn9kqLVtBRLqBjAwXfu7nHzhY+PH1Qbf9jYoSozwyNX/PAACNBSISqShhi0h+CHc+3Dqjm6+RqYeIrJbvfve72nX1S61EpF/4fOJziwtWHBaJHN3x5jhFJReM+vr6xEOQkpTOjvhVnIITkhKpJBCRSD2CQlJjiKqIdKaSPiKdtTL5vse1IdX9kpt6s2hUo42nUikhIZ33ziDhPjM5P/ZjP+Z6j+Xoa1/7WnrPe96jXU9d4ioiK+UHP/iB9rj7BSJSD58/unVtZJIkI7lGIfe7y9cR/o7z9eOP/uiPRK1I3baXE5aGnEceeUQMuMn77Stf+YrI4uIinT9/XoQrWnD43snHmwU0h2sl8/px+IcBAEB0gIhEKkrYIlJ9Xqno5mtkICLLF5EqumVxVAHKLiq9akNy7KKSwwUf/pdrjpQSnJCUSNBARCL1CERkY0iyiKw2XJuIRSbLhd/+7d8WNRz5xz9u1cD3zM2bN4vuVlhk8kA/LDFZOupkJIdFpe5zdIGIDBaISG9YUunWt5GJu4xk0cfXBPUczs/M3NxZNXXm9yuRhiwMIQ0BSD4QkUhFCVtEcj+C9gdYe37lV35F/Kubr5GBiKy9iPRLpaKS+8vif9X/uQ8t3fQcSErEnriLyFwuR3fv3i07X/7yl7X7A6lNICIbA9+3dee7X1go6I6jX+IkIqsNi8wjR46IZuXcZJMlpm46XSAigwUisjSQkbWBn+2dApJfAwCAcoCIRCpK2CKS+0XifgX5l3f+xZ1/bWc5yeuhBrHRzdfIQESGLyJLhQtBdlHJzdGCiEpuqsbhaflfr9FDldSEpGyuxF1EVgrXbtDtD6Q2gYiMF1zjR3cc/dJMIrKaQEQGC0SkP5CRlcE1Fb/61a+KZ1z1zDsxMYF7FQCgYiAikYoStogsFYjI4DSziPSLU1Tyw5afqORRQ3/xF39RyEkOd+avm84pKfkzICmTEYhIpB5B4S5eQETWNxCRwQIRGQzIyOCwgGThqLoz4n/tza8BAKBSICKRigIR6R+IyHiJSL/YO/kPKip51FKWk6rzf900OklZiw6+kXACEYnUIxCR8QIisr6BiAwWiMjgQEaWRtf8mmtEQkACAGoFRCRSUSAi/QMRmSwR6Rd+YGOJyE2yg4pKHtG7ra1NdDWge98pKVmEQlJGKxCRSD0CERkvICLrG4jIYIGILA9+RtVtQyPTSBnJkpH3CT9zqmdQfu7E/QgAUA8gIpGKAhHpH4jI5hKRfqlEVPKoo1yrUveeTlLyZ+g+G6lfICKRegQFv3gBEVnfQEQGC0Rk+UBGuptf8/Ml/43ajwCAegIRiVQUiEj/QERCRJaTSkWl7nVIyvACEYnUIxCR8QIisr6BiAwWiMjKaFYZydvNz4bqOZOfF9H/IwAgLCAikYoCEekfiEiIyFqmElGpCzcDh6SsXSAikXoEIjJeQETWNxCRwQIRWTnNIiN1za/5WRD3HABA2EBEIhUFItI/EJEQkWHGS1S+7nWvE9+PIGFJ2dnZCUlZRiAikXoEhcJ4ARFZ30BEBgtEZHWcP39eu02NTK1kpGp+bReQ/PetW7fkFAAAEC4QkUhFgYj0D0QkRGSUohOVb3rTm6i1tVV8f/zCA+u8+c1vpne84x2QlLZARCL1SJxEJI+k2uxARNY3EJHBAhFZPUmTkXwv4Wc1NL8GAEQNiEikokBE+gciEiIyTnGKShaOb3jDGzwHy7GHJeUv/MIviHl6enqaSlJCRCL1SJxEJNe8bvZaNRCR9Q1EZLBARNaGJMhIfv7mZzkWkGh+HS3w4x0AJhCRSEWBiPQPRCREZJJiF5UsG7ds2UI///M/LySkEpK6vOpVrxLTsazgWpj3339/oiQlRCRSj0BExguIyPqm2UTkj370I/EMWW74mUu3/8JMUoijjORajseOHXM1v4aAjBZxfe4DoNZARCIVRYnIf/9bv0WfO3Soofmtd7xDrIvuvUbmK0eP0lc/+clI5f9/+LB2Xf3y/xjH+T++971lZ/I//2dxrpQbFl5IvPKBD3yA3mscc34A5ibc6XRaNPv+iZ/4CfH91IXf+6mf+in6uZ/7Odq8ebP2wTvqgYhE6hGIyHgBEVnfNJuIrJTLly9r91+YSRJRvM/pZCRff9H8Oj7E9bkPgFoDEYlUFJZFSiYgCILUIrqH7qgHIhKpRyAi4wVEZH0DERkMiMjaw83dh4eHRXTb24goGck1YJ0Ckl8D0Sauz30A1BqISKTi/NeZGTpu3BCjkEeNG7Hu9Ubmmbk5+rZROIlSeJ0ufP7zZefhD3+YPnX//WXnib/6K5ozPrPcqIc+pHnCTb51D9xRT7OKSBZPLMuQ+iRONVkgIiv/PjzzzDPa5yu/QEQCHRCRlcHXW/4+ct99XJOQ5R637lCCj8Pd0Oi2txHh9eOWJ7xeaH4dPyAiATCBiESQOoX760kKs7Oz2ochv6CPSCTpaVYRCYACIrJyrl69qr0H+gUisgBLJF34nNTFLoLt4ecVZ1hM6cL98DnDAksXlkTOsEjShQd604V/qHOGa7858+u//uv0xje+UYQHkOMuUriP5vb2dtENSiqVol/6pV8S/Tzr9nMtElX4nODjzMePjwvvb95nfP2yC0cl91hE8n7m46WOr257G5FPfvKT4hj/7u/+rtiuuMH7tJnBcx8AJhCRCFKnQERCRCLJD0QkaHYgIiunViLy/e9/P/3qr/4qve1tb9PmrW99qzZ87Fi42GUWSypdeDpndJ/Fy9RFCTJnWProwjXQVF772tdar9uFUVJi3257dPtR7fvt27fTO97xDnEcuXYc//uGN7xB7K9Xv/rV9OM//uOuz+F+mblPZp7ffv7UMo2GxRw/e9plI28v70/7v
uC/+XWWjTwdT8+ispTY4+VyNwwcbh6tws/IKo888ojIww8/LKLbR7UMf2ac4P3L+76Z4fMPAAARiSB1C0QkRCSS/EBEgmaH5QhEZGXUSkQ6JUutYpeBzvBnOqMTjRynTOPoJCZHiTZ7fv/3f9+qNWiPvZahCgslXZw1GFWctR05/Oyii70GpT32GpcqLFx0CQpPy8vm9eH15G3j2pG8P3j/8f52Hq+f+ZmfEfub9yPLYxbUPA93f8K16OznTL0SBmrfqP1SjmxUx7GcY1FLXnnlFRF1PnC+//3vi6hzh58rVNQ5xuGm9xz7Ocmje/N7cYG3l49LM8PnJgAAIhJB6haISIhIJPmBiATNDksRLjyD8qmliOzs7KQPf/jDRTly5Ig2SuTx9ae7u1tIqrBEVSWp9jobRVjIsEBi0ciCjI+HXTQqmWaPn1hLUh+Ruv2jk41qv7CM1O0TEC0gIvHcB4ACIhJB6hSISIhIJPmBiATNDkRk5dRSRHItOPtrQcLXH645p3svSomjiNSJNK+aeyr8ejU1+OImIr32kU7E2vcN14LkeSAb4wdEJJ77AFBARCJInQIRCRGJJD8QkaDZgYisHIjIYImiiCxXNPJr/F2pRjT6EUURydvF1wfeTr/9xH879w9kY7KAiMRzHwAKiEgEqVMgIiEikeQHIhI0OxCRlQMRGSyNEJGVikaehptYq/4qayka/WikiOSm/bzdlfbbiGtIcwARiec+ABQQkQhSp9RbRPKDGz/IhQFEJILoAxEJmh2IyMqBiAyWWotIliF8zpYrGu0CjWUbSzdeBi8rDNHoRyNF5O/8zu8U7S8lZe2ysRG1G9XxAdEAIhLPfQAoICIRpE4JQ0TyQ14YQEQiiD4QkaDZgYisHIjIYCn3OqtEIz+DqBp65YrGRsqzSolCjcio7S91LEE0gIjEcx8ACohIBKlTICIhIpHkByISNDsQkZUDERkszussywwWXqo2I4tG+4jTSRWNfsRtsJowgIiMFhCReO4DQAERiSB1CkQkRCSS/EBEgmYHIrJyICL14dp1w8PDornv+9//fvr93/99SzTqJCOnGURjEL7//e83NFEDIjJaQETiuQ8ABUQkgtQpEJEQkUjyAxEJmh2IyMppVhHpFI28DnwevfGNb6TXvOY1lly0B6IRVAJEZLSAiMRzHwAKiEgEqVMgIiEikeQHIhI0OxCRlZNUEVmuaOTXeBt4GpaNPD3Px7UgeTnPPvus3GMAlAdEZLSAiMRzHwAKiEgEqVOSJCK5uQ0XNMvNM888o903ftEVbBAkioGIBM0ORGTl5HI58axQbv7sz/6s6DoUtoisRDTye7yOLBr52YVrNrJoZFHEy9N9jkq119lq4H4oUeMyvkBERguISDz3AaCAiESQOoULC/UkTBFZKdxxum7f+EVXEEGQKAYiEjQ7EJHh8/DDDxddh2opIlkKsrxhSciysFzRqGoz8rwsK4OIRr80UkSyOOXm3yCeQERGC4hIwv0SAAlEJILUKRCREJFI8gMRCZodiMjw0YlIDkvAcsLXnze96U1CtvFx5GUEFY2q2XS1kjFIICJBpUBERguISACAAiISQeoUiEiISCT5gYgEzQ5EZPg4RaSqqVhJfvInf1L820jR6BeISFApEJHRAiISAKCAiESQOgUiEiISSX4gIkGzAxEZPk4RmfRARIJKgYiMFhCRAAAFRCSC1CkQkRCRSPIDEQmaHYjI8IGIDA+IyHgDERktICIBAAqISASpUyAiISKR5AciEjQ7EJHhAxEZHhCR8QYiMlpARAIAFBCRCFKnQERCRCLJD0QkaHYgIsMHIjI8ICLjDURktIiKiMxvvECnsw9QX1cHtaXUwFytlN7cQ4eyp+mFjbycMv4sDvK27aDsi/IFED3ya3Ru9BB9rugYLdIgn5c7spTUQwcRiSB1CkQkRCSS/EBEgmYHIjJ8ICLDAyIy3kBERovGi8gcXZneTx1SPqbaOqgnc5gOHzaS6aGOtpQpJVPbaGRpXc4TbyAio8/ZIT7vnMcIItIXiEgE0QciUi8in376ad/oCiIIEsVARIJmByIyfCAiw6ORIvLaY/to8+bNAbKPHrsmZyqTrx81l3H06/IF+jodFcs8avwv/jRMROav08LUKSo6LF8/Kvb1vkoPVgJorIjM00q2m1Isd1p30cSTtygn3ymQo9W5fkoLGdlNU5B3IASaVRZDRCJIDcLyzPkaRCRqRCLJD0QkaHYgIsMHIjI8GikiX8zuEPcI/1RegDULwC00uChfULVwWgaN/8WfxojIp+g/daTcNZkWB8W+3tHEVdMaKiJfzNIOPrdT3ZRdKdX0Ok8rk+Z3L9V/nDbkqwDUC4jICoGIRBB9ICIhIpHkByISNDsQkeEDERkeURCR9RRXEJH1wKNJZe4e3b59m+4mqP/BcmmciMzT0pF2ca53jl02/vJh4zgNpLdS3x8co6vyJZM8rV2YpkM9myndan53RN+SXX00emK1uIalFM8thS+XDf33LL92gaYP2ZqIq2XPXSFnQ/Hc6gka7esqrEeqjTp6DlH2764XbZ+X5FKftTndKj+rhVrTXdQ3eoJWizbkRcruMN7n8zm3SieGC+uXajP20egCXS/nlOZljPZRl/pcsd7DdKL4QwWu/dGapq6+Uc20cn8a+5rnyR4o7JfWdA8dmr7o2H8VbpMx7ensIeoJsO4Cv21Vcrwo6lh5XEesc7DQv6n+uBnIc3BwkefJ0oGuNLWKzzDOK+Ncmb7o7n6gnHOwWiAiEaROgYiEiESSH4hI0OxARIYPRGR4QETGm0iJSNA4EZk3jomQNtXUOrM37d5CfQfNviUP9m2VQihF3fa23OWKyHXjtTS/1kpb+g5a/VYqoZYeWbYEY34lS938mak22in7uLSvR+/sDTml+o4Xb7c1v/2zDvbRViXjuqds566Udl37qX+b8b7admP6LXLdUr2zVPjEEljbaArCjFjvLVKOpY3rUEF1rV8ck+uYoradGbmN+mmt/bm7n/qN5afadoplH87stGSdff9VtE3rSzTC0/J7ruU718cgyLbeOUMPGa+/r5Nfa6ed9xvLPPwQnbnDC5DbVHQdWaeLY/IctI79QerbIkVn2jifinaLeQ7u7ufuBgr7MbOzzVyGsR4jyzbjWsY5WAsgIhGkToGIhIhEkh+ISNDsQESGD0RkeMRLRF6jx/Z59xmp+py091FYExGZX6Nn58aNQnaXWP7mngwdnb5Aa541io5Spsfsm7LLKOyOzz3rnvbaY7RPrSvXKho3CttdPE8X9T2QpdPOqj8e07/udT9HW//5B9zTK3K36JKx7kXr46zZZke3/rbpzX0sax1x7Sdjms37HjP7iizRR2Ru9TRlj2aoh6fnbTw4TnPPrrkK/ebyzePLNeLGD/ZRF8/T1UcPZE+7a0RFjIaJyJVJ6uJj0j5Cy/KlsrkxTbt5GU7ZY5A3li9qtm0ao8vytXJF5OWxTcbfKcrMO4XWPGVajXNp6yfkum/QfIbn7aSxy8VnSP7yGHW2pmlz5nFLorlF5A2a3s2v
aeRZfoUmWdC1bDKWLV9T0s5Y31T3JF2xn2PrCzTQ7ly+Fxt0vJ9FXoq6x4prKK4b+0r0y5kepaf4hY2zNCSEmLGOC8XfA0tQpnqp4FvV/myh9OBC0fXEOjYpY19br5e7TRvGfjRr1Kb754tqSxaksH19ythWA32tVbeI3Dg7JOcdpIWii2ZBUBYJVHUOuvajrfsBrkUqXw1+DtYGiEgEqVMgIoleeeWViqIriCBIFAMRCZodiMjw4cJ8M4WfCxpFvESkKlzrpYBueVWLyPVFGuL+EHkeKd5U7ZlUx1CRsMldydIu9V5bh5ByXtOqJos7PjohajipZVvNBVMdNHzW1nufx/Q/k/oJ/fQG+evz5rT8Pssb2/q07poiZzeC+ZUpa/3N6W1NI+X0JUWklALFxzNHV7K7ZC2pFLV1sIhUzSdT1DG0WCQxzGO4gz46YQ6oYu7HwnqkOobJsZmRgr/PDRGRSshUUUv1xuMZ0YxZ/33UfG/UZwYUkeZ3sZ2OLDlOPBfqe76bpouq7OlxSa4bj1OGzzGPfeG+JqjPSxmvudfNPb0Hd2apl7d508fpomsxLFdbKb15D33mf/CkvWKZ7cZCdafz8xOd4v1Nli1V+1N37dNdF8vcJiWhU/10XLNCN6Z3i+k7J543XyhjWxnXMRI4ReQdmu3l6dqN9dLuFZoQNSttErnUea+ahdveM9cjyDlYG6ouAd29e5e+973vIQjiSL0fnOMgIitFJ3wQJIqBiATNDkQkSDIQkaW4YRSMzVo/20bOFWoh5ddoYTAtlmuJBKuWUZr652w1Do1pz41sM2vy2JuD2vpOS/fP2Wr65ejKpGya2DtrFM0lHtNPTIzRg/+/t7inz1+mMVFo91739NDZggTJL9OIrKG1d/pKYf1zV2iy25Sjhf3qFAgSjYhcXxigdp423U9ztuqM+bVzshlocXNfdQxd+9G2Hr2z1lZGjqiKyMJ+dUb/Xcrdu023ry7TqVMzNH44Qz0dqqlr5SLyzhN7zWWkOmjPA1k6tnyV7hVOiSKeGjXP0ZbW7ZQZn6Ezl14ir65H9ZJLIvotvUrLp07RzPhhytj6HSystrqudNOUuzKvte+0m2lH7o+U8b3y4+yQeS4PLHhslKrhalwzzFWS+zM1RLqlu/dBedu0MZ8Rf7cMLLhqKQuc61PGtjLBRORZGhLHZoC8d0uX+NxutVEl18N9nSrnHKwFKAEBEFMgIhGk8YGIBM0ORCRIMlEQkar2oD5H6ety+kLhOiQR+fwEdfK0OrmTX6IjLB5lc0hVg0nfx5hqKmqrnWSJxQzNOyv/5BdowLmOZU6/cbzfLHDvni40Y1RYkrJQ40xNr62hpWpLWfvBXcAXSClQ2P+qBpOjnzaFVQvL3IeMJcwy8671yC8MmO/5GqHG0TAReXmMNvG+sTedtqG6LShECbni79L6xSwdUP3xWeHBPLaYTWarEJHG0mlxWPUhqMKDimTc3RfkV2jqN5T8lEm10da+Byh7urhrAa3kWr9I2QPOzzIHPdkiawkXVltdV+zrWiCoiNRdf/Somn9dNLkiX3KyMU8ZXmfreHp85yTeIjLYNqm/X7v9gNlvoivvM6+FcnnBt9UkkIhUtSy7Jsl7t5jC1Kop6rrm2NHtszLOwRqAEhAAMQUiEkEaH4hI0OxARIIkEwURWTr2gnS4IvLaVLeYt0trC/K0cW9DSkclFuz9zhWjCtDtI7IHMk2zwQIrNNnlWMeyps/TwgD/3UIZl7U0MftKK9QuLF1DK0f3iqoNeUgRpxSwmm/q5ZixV2RfgO1U2C0lBIeqleVnhBpIw0SkNVhNsObMuu9SfnnElI2qttiZS7R6+66siaj53pQtIk1y967KfkhVE30zqe6so7sA4zv20iWa4/5QtxZLybTxmapJv0tyWbV7U9Sx5wHKHjtDl1YLo7m7rwkRFpGWlPP4zklqJSL9Yy4v+Laa1FpEWtfkskWkSfBzsDpQAgIgpkBEIkjjAxEJmh2ISJBk0DTbGyXn/AQE0WUa28TL1NRWVCyPmE2UVdNHJRa1C9dIhLKmV3+/lvY9dIpOnXLny0d7xLaZTRqv0VQ3T6/fr248CvhOKaBq6WlqNyqWR8wBMpQAdQqSIkrug2jQMBFp7GFz8JCg3yfnd0lJ4Ra6b84xkIdA872pUEQWkd+gF05P0F4pDvt1HRQqxMBLQ7RNCNdOUt0VOiWX1cz4vrmi/kcV9RKRqsZuTZpmq++Os2l2nUSkS/D5UM62MoFEZICm2eoHFGfT7HJFZBHlnINlghIQADEFIhJBGh+ISNDsQESCJAMR6Y17Xi9UAbrEMlVtPiXlQhGRASKWV3q/uvEo4DulwNkhsxZbiR2o+nxTNTchIqtA7Z9UN2V9q3U5j3npcyC/dMQU6bpz0t43qUKJNGv6S/TQu3iwJL2sV+eBOHf+8Ut0kJuOd+pq0hZq+6rTwCm5Sl5XVJcKtvkL267//pY8J+2orga0A7iYEq01vZkeWPynwIPVWDWovb5zkmpFpHUsdV056ChjW5lgIjLoYDWFGtSua04RzuWXcQ7WCJSAAIgpEJEI0vhARIJmByISJBmISG+czZe9Kb1ejKt/w7LEokFZ0xeafn7qqdt0+3aJiCbX7ibSpQkoItU6e8gTnVQqKX1K7oNo0FARaezPlawc6Kh1Ow2fKO5LUcHNUk8Mb5dNUtU5W6gRuWNyxay1K8mvLdCg7Fex6JzcOE79LOCd4tM2sFBh+jwtHTFrv+540DYYEpNfoUlx/qrzT/Wp2k4DJ9aK1qUwKFQvqa+lU3JZNSJ3TBY3s7UN1MQpnEZlSjtP1o19yNudou7J4m3MX5+hXrGv5KjUG2dpSOzTNA0uFG/j+sUx6pb7tTCOU51FZIl1Nw6oHECrldIHv0T/KF4rY1sNzPVzXl/c27RxdsjsHiA9SAtFHTau08Ux89wuGvSrLBFZzjlYG1ACAiCmQEQiSOMDEQmanQMHDkBEgsQSLxGpxJWuL8a8USY15UetRKQawCVlzFwkQwR5OvuHXdR38ON00rhNqqaW+mathQKw1aSwriKyIFG9+oh0ompg6Y/Fi/S59/VQ5vBf0lM/5L8Dikirpuh9pN8tqnZaYWRfiMhqydP1hWHa3sr71UhrmnoycsCRg33UZY1+ze9tp+G5K1bzZUsCtaSobWdGzHOwzxzYo3XXH9NHhRy0y648LY9IsZdqo538OcZnbOHPTg/S4H08ve0cXjfOGyk0W7f00UExCEqGdraZ3x17v4/5lawp42zrYi2b5Ve2IEtdEs6SfMZ3t22ncd7a5m3dRX/80d3ivcJ5WisRaaDbxsxOOTBQ2lhG4YtgCUfN/hajxs9ft7bR8zsnqV5EGhjrPtRhHovWNH/fzePTk5aDFxn7bspudsvYVvV5qW37jGX+CS2Kape6bSoIR+ucOnyQ+tQASul+mr9uW4eyRKRBGedgLUAJCICYAhGJII0PRCQAACSXeInIQp+CrlpblrgoXl41IpI2jGmFKNtBk46mrvn/NW3W+mk/QkvGW9ZAH2ljuY6SbO7KpKz
h1Euzqt1jnUWkNb1mfbiwvzjcRZt7MvTp82b9Jmv69gFacK7/0hFz26xmm3IfOgehcUmBgqhyF/BVLasWSvXOWs1BISJrhOhPcZwyPdwU1dhnvN9YeHVspp7MOM0svyAHobGTp7ULWTpgDQxjTL+1j0blaMJ6ub1OF6cP2WRVmnoOTdNF42Cb373i71l+7QJNH+qhDil+zBG5+TMKQlSRWz1Bo31dhfVPtVFHzyHK/p1d0KnPsUs483OyB7ZKKcZCciv1jc7Rs+aGOPouraGIZHKrdGK0j7aqbRTrPUxzV1xfRPf+MPZfV98onVgtrpNofefqKSIZue5d6nga4ZHGrX3nJOi2OkYxN/vG9NomPg+n6VCPGtldrcMJcu+WMkWkQTnnYLWgBARATIGIRJDGByISAACSS9xEJD0/QZ2y8Lj9UJaOnTpFM6N7qMMosKa7u4VgqJmINFg3CrrWSMKjM2Kgl2PZQ7LGmb3WzwYtj24zBU7rdjqUPSamVevG0+59wrad9RaRRmHe3kw3M26u+6mZccpsV7WLjOmtknfx9K71L2p+qwbnSVP//2ss8/yKKXR0UmBjmUa3mQX+1u2HKHuMB8uZodE9HeZnpfdS8W6BiAQAJAOUgACIKRCRCNL4QEQCAEByiZ2I5Noy50atpnQi3IRv9BytrZiiqpYi0qydk6UDqmmgTKpjD42fc/RfRzlaPTFcvG5GWrccoOwFx7R1F5GM2UzXuT5if8laa8V4rP/2QzR3pbgq0otP7JeClSM/16t2EteaGlZNNlVaacuBLF1w1LKCiAQAJAWUgACIKRCRCNL4QEQCAEByaaSIzG/cFQOm3HW3EQ1Anjbu8oArdwtNTPMbdNexvNw9c1AWMSaLIEf3xEAt94oHKwiAWpY5wEtp1LSe2ybXVb8stW22dSx3egfWugfabrU8n2OTuyeXKY+B/Nt7HrXvbcfMgTon9JtZah9EA4hIAIACJSAAYgpEJII0PhCRAACQXBopIgFIGhCRAAAFSkAAxBSISARpfCAiAQAguUBEAlA7ICIBAAqUgACIKUkWka+88gqCxCbVABEJAADRBSISgNoBEQkAUKAEBEBMuXXrFn32s5+VfwEA4ghEJAAARBeISABqB0QkAECBEhAAAADQICAiAQAgukBEAlA7WES+5S1vkX8BAJoZlIAAAACABgERCQAA0QUiEgAAAKg9KAEBAAAADQIiEgAAogtEJAAAAFB7UAICAAAAGgREJAAARJcPfehDok9uAAAAANQOlIAAAACABnHgwAH5PwAAAAAAAABIPhCRAAAAQid36xLNjR+mTM9m2ryZ00OZw+M0d+kW5eQ01fD1o7zMffTYNflC0/F1Oir261Hjfwr52r7HqGl3CwBJJ79BL5zO0tGDfdQlrgGbqavvIB3NnqYXNvJyomrAdcS4wZj3raOFq6t6bV/z3nQAAACAwEBEAgAACJF1ujixi1pbWkSzZF1ad03QxXU5eYUsDvKydlD2RflC07FIg2J/Dhr/U8jXdmSpaXcLAAkmf32eBjpSRdfToqQ6aGD+OlWnI3EdMW4w5v4cLFxd1Ws7mvemExPu0JmHDtPhwwHy0Blj6gq4c4Yecs7/3OPmMh9/Tr4AKiK3ShcuFx+V5x7n4/UQnanoYAEAGgVEJEg8+Y27dPv2bSN3qSaVASJHnjbu1mr78pQvWoZa9r2a1FIDYMMorLVzAa51F/3x6at0T51Y+Q26e/UEDW0zC9Gp3lm6Id+qBIhInYjM0T2+Ft7dqFJEgMpQ19MEHwP+Htdq+4ybUfHtSC7bumiAYl6k7A7+zqeoY2CGLr1UOAa5e6v0ZHYvpcU1IU0jy9UcHYhIrYjM3RPf7bvJfNBMEOp7EiCVnuMvZmmHc37dOQPK48UnaH9HyiX78bwHQDyBiATJJL9GF6YPUU+6tfihoqWV0j3DdGI1SQUZ9VBV3U04v3aBJnbtcixDLdsuMwColDs028vnUzsdWfIorK0v0EC7Oc3IsnytAvBgqhORoBHwtXX6UA+lW/l42NKapp7hE5Ss25GmAF4BudU5GugYKj531bJRkNezPGL+yLN72uNHnDytTO4wz73e2cpqegkgIiGV4ox6ru2iTz0lfxjySqU/qEBE1ge5D50i8uYzp+jUqfO0siFfAADEAohIkDjsTZNat/TRwfEZ4wZl3KSOZemBvi2ySWia9s6uVF9jIxLURkTqxc01emyfs585ACol2Ll6eWyT+P4WHjbVeajv8/HaY/tcfXMVzuccrZ7O0iHZF2VPZpzmnl3Tfvdzq6cpezRDPdz3lxHuV2187hLd0omi/Bo9OzdOhzM9Ylpz2Ucpe3rVUXtYrrvoT22drhjzqH4xeV2sH0Vyq3Q6+wD1dfF7PZQZdwoq23LEZ4/Kabuo74EsnXbZLJ2I1PTtdu0x2me8JvadsQ4nxg/6LNckv/YszY2rfVVYX/TNaSdP1+cHqCPFx6GVtvD5NMMFplN0jI/1FvlDWXovza4kpBZVTUSkxw9g8lwt6pcPFAgiOjbmKeO8Lqj9qu3zUXfttYlIcd06ZLsOzNGza9qrq7gOH7Wul8b15aB3n8Dm9cXRh/BRzfXIfv1av6K9JjHi2v6A7DOzJ0PjJxzXadtyxGePymm7+uiB7Gn3jwW6fa3pI9K8HvLzU57WLkwXtp/XYe5Z0u4qua8e6OsS03b1PUDTF4x7Fs7/GlGbZ+aSQETWBw8RCQCIJxCRIFmsGw/IaX7AaKVd2SvaB9z1i2PULQqG1TZPigr1FJEA1BJVI9J4kHxQ//1kRHcKRTURSp/jL2bNWj72h1PzfN5Emf5uShnvtaaNAlxHm/g/N13sHrtI9m4o140HXLPZYiulubC3uYPaxHWihVIdw3TW/ku7cZ0Zkj92pNo6zIKltewWShsFjcKy5bp3DdDI3rT47LYOY3pVWzvVTdkLC+byUm3UYSzLqjmXHqHCJUouZ8fHKTvIy3Fuk/PHFZ2I1NRkkgWmHR+doH6+dsp16GiT/cylOmi4aOONzV8aoW1y3xStQ3qQBu/j13EdYaxzqnUXZa9o70Z0ccw8P4uPdYypp4gEpVE1ItsH6ITecBmY3TMUtW4vecx01155HdmUof5uvk6Y18zCNaObxoo6+V03rsfmNYtrARdfL1PUMXyWii+vQ1Ley2ulfdnGdW5w0bZsue5dAyO013b9UtfQVHeWLiyYyzOv1Wmrf+L0yHLheqmugx/PWs+QRdvk/LFAJ5U0ksS8D2VobMJ2H7KtQ+uuzxXv8/wKzYr7hPF+0bYY96zBAeri1yGyqqT084QWXZ+PFqrPSVsfhbrvlO6c8YMHnlo+RtmjvPzDND5zxkP0G+Ru0aUzMzTO63n4KGWPLWsHp7pz5qHCuuZWaflYlo7yPEezdFLzw0C501uIdS9eH4/fNU0061+YXu7j93WKfdi+835jmsP0kNzhpfqI5IERz8yMi+l5nY8tv6DpxkouXx7f3OoyHcseFfMczZ6kS9pfowEA1QIRCRJEnpaOtIubVNEDpgYlLjybMKm+qEr0jWj2PW
nrl7FE31i5e2YzD8+urWTfQtq+r9R7nk1ENA9VJdaF91NRn5Jy2mP38zJUUxW1Xf59RKpt8+wXSS6/8H4ZfVqqba9J/5cgCuSNwrIp/LhguJMy4zN05tKq93dDULrg4C0iOWnqnyvUfuEmn0K4Ga8PKcGWNwrWXPB1FuDza3RiwLym9M6qJ9wNOt7PhVOjYDhZLFPzawuyEGtfT7XuRtL9NGc9Wefo4sfNh2pOun+u8NBtFEazooCfov7jqohuW05qG42cU7U6uaaN+nFFIwsCikjXOhjrd2VSSjJ7M871ecqIz0rT4EKhZmlhv3LKKOAllfwSHRFdDPj94FU4rrunPXpF9b0HyGuq7X3VN7L7uiz7CQ1wTdd9J60+l72+sLoCeKn7m3N9xLRP0ae6eJ/cT8fs76n7mufFotAHp+ckcl0K7/vvD4XaL97HodHcoNleJey4Bu4DojB/1W99dcfMQnftVdcWI45r2upcv3l9Tw9ZP97kFwfFdaR94ERRDcD82gnZDUcvFS6vx6mfry+pbposkvfGdW5Biv0g16/cRfp4p/m68x6QX8ma18tUPxUur4XlpLaN0Dm1otzNj/qxwE8qeYpI/qxuGuNajfL1wr3C3gVJni6PmfeEVPckWZtvXwfnZ4IKKP08oaXc74hu+jJFZH5lVvSHKOaxJ9VB+x0tutaXRmm7s+sPj2nN56Ud9OCJLO3SzJPeO0t2517u9Ezuin7altbtNLrkHonQc/2t6W3PP7ao75q+IsU6LY1u1w6MmOrY72iFIJe/40E6kdUNppikVnQARAeISJAcrCZHu8mrPGexsUB/KJrJPUM35UsM9+WVPaCab8uk2minph8vdXPOrlynheHimx0LllFLFBjFg+nd4vVNY5flK8XoREpu9QQN7yzUshJp3UIHshcczXkCPgRZOKa3PYAXopalpnXWTDEKHCeGaadVS8FM65YDlLU9bAvk8ndkV+j6wnDxwwbv29FzruZJ+esL7m0XBavxQgEBxBSjQHlO99DJtV/Mpn3uX58157iNUiKy0/jOOc+Y/NIRs+aQEmzqO5CZL6qZI8jdK5Yad2apl6fdNEa6b7P63EJZQ617C2XmHUtfmTRruGiuWXdmex3bVFiOTlipa0y7VaotV0RmyLl6lF+gAccy1L7W7ldLMuuPUzOxMZ8R+8m7v74CGwt/KJqpzjxjvxtVcA8wjusKXzu3y9q2Isb3aueo7bp5g6Z38+ubSH870n3XWHZn6YBqSi7D97lhZxNX3b2nZAHccZ6qaYsi31PLdi6HRU32AG0puqbwdmv6g5bL51p1F7P7Zc07Gd63086a2nlx33Le63javnH3vavh5K7Q9P4Ox72T1zctmvhmT2tqA+mOmYXufFDHrNM4h1xXAesHYfXjjbpmuK5/Brl7xQJYXff0z0qaa1qJ69fKZJdYlvs7qGrm27bJWo7u+VF9Z2zSUHdOy9d09yHd9qj1s6ZXP4iljG1xuRqbZA4osoAXunPah3K/I7rpS14HHWycpSFZM3fXxJP0kvjS5ujWk+pHx8J3L395jDp5ualtNHRCfb952gkpA4trEZvfxxSlUinaNnRC1prM08YL6sdE+w+g5U9PN4xnJLGO5rqbj3O2dedWIEWmc0q+3kEDc3IAw/wGvaB+1LB/HzTfMcYtIm1Sf9sQnXhB/hiTu0VPTkjRmDauI9ZukcfQ2MaUfT8WrYfthwsAQE2AiASJIb8wIG46+geFALz4hNm0x7ipduwZpRnZr+QhWajjJj7uXwk30dvfbrzfup0OZY/RqVPHKHtISclOmnheTlxSXqiHmMID8MbyqGz62ErbD2XpGK/LzCjtkb+Oapt++j0EWTim31ih88byPyELpwcftXf6rKa1PfjTBi2PbjMLOtZ2n6KZ0T2yUKdvOrXp7W839kthe45lD1kyqtPaUQaqeb3xULJn1Na/5x5ZuOqcINvUIK4YD3gvXTpp9tulGVRq14S96bTmHLehCrruAqCXbDEe8sW5OkAL4mFTFgD5/ByeoeWrfjVwc3TPspNmjaqry8Z3YPwg9YhriE5EatZdfU+7JmlFvmTheuBWy9EIQ0ZdY6xlaQrt6jX7daHktWKFJkXNNLUMVYDvoknXCjOXaWwTv68/Ts1DnhYGeD+4C0xBqegesOnt9Hbjmtq6/RBljxXfv+zXzZKyR50PNnnz4hNytGXbNblw/U5Rd9ZWU0R3PpUsgDvO05vPGMt/lA6K82g3fYK3+5T8wVAtu2g5L9ITsilrqmMPjYo+OG33YWehV65Lp3E/Stm2p3D/Ki5Uq+b1hWWb/Xuq41B074oQuXtXaXmG+6MtdDGhkuoYoPnrRQ8zJa4BuuuXPGYeP8bQ2SHzXj2wIM4LVSOSnxeGZwLU0LT/+CNqsBrbcmqGxg/2yB86bNe0Euuu7gtdmouVS1yo5eh+jDJQ3xlrWbpzWr7mvg/Z7wcF1DOrNb1qWi/3mxPrxw3t9wgER53T7bTzfm7S65HHn5PTG5T7HdFNrztnPFA/LOpad60b5wF/n8wfHZUkT1HGba8LkjI9Sk/J19T3QvcjmTrHUkNn5SvlTl/4IUL3Y+W6cc7zOZ7qPy6/Z2r6FPXOOpdekIlWixTNd4xxfZ9vTNNuYzq91C8sNz1q7RV5DHU/9G7QfIbfS5FttwAAagBEJEgM1s2yooc09Wuzo1DF5NdoQfZvZL+xWp/XPkALRTe6wk2uUNBTNzKNGFEPLOomn1+mESEzHEKPyV2hSdlks/DQEfAhyEJXsNDcyAVq2sKDv1XrqejXRJPclUn5y6bt5q/WxXjoGyjeUYWHJFuBRj3wux8GbtATB3oo49EPDIg33OTz6vIMjSrhXPRgqj9nFeq76C4Aekg7zfIKfUTKpNpoK9cgOqkZrIZrYGlH5edf083/Fy5D6rOk9LRT6nvqeuCWy/Eq/DuFjutvpoSI1F43nd9/49okmlJqtkWgrnP649Q8qP2mFxC+VHwP4OavCzZBaZC/TGOiiart3qNaD2jOJfVdsq6/qnaLU+gZFJqX2mrG6c7pkgVw3XnqPO8kmnP1hnG/4OuF84dCY+0KTXk7je1U76l1Mc7RScf2qAJ+QUYp8V74kdDixhN0oCfj0Wdc1MjRvdVLdNL2419Rn6SlrkPWsbB/p+Ux85B27uXZ+oiUSbVtFYNh6fqY8xxlnmsrif/bzgv1WRp5p7svKFzPO3I5Xq1WXOew7px2XbM1n2PHMb1Ldjoxnpc2OT8TVEDhelky9v1c7ndEN73unNGirjseP6RurNBTl1bNLiLUtbzd+D7Lt4tx14BX3wvteaY5x8qbXv3Iq7lmCtQzhHo2c/7t4OYztHzV1h2H5jvGOL9nSpAWWog4UKLSugeqY6j/kVUNoIivHgC1BSISJAZ1s6zoIe3aFHWLm9LH6aLzaZZRNy3bzV59nvZGp2oE2NZl43i/eM35oKuWYzVbUvPeN1dcoJS4mpUGfQiy0BUsvB6Y1bSFB/+zQ2ZNkPvmtGvnapZlrYv2QUk9tBSWf22qW8zf+fGLrgIKaAbyd
H3GlAvuh0Tn+WmivkPuAqBDZFjol8ejpXJhvcc28IxI6y6asqTFi/S5XbKWNBemD3Kn6qesfi7Nz9WJSM26lPqeuh645XK032kmDBGpW2YxJQveTYPabxUWXCq+B9j7myugrtmFdVH9nDoLumo5hUKhuh5vMq7H+lujo0uAsgvgunPKed5JXOfqNZrq5uk20cf1N27vJrX2fk8V6jnAWne1/E5j+Qm5G61/gz4q+060njlKXYesY2H/Tstj5nVya5eXp7VnT1L2UI9t4BkzrbumChL5xc/J5qQpatvaRwd5cIlTZ+jSKjfh1pwrJa5fuvuCwnWdksvRTStwnsO6c9p1zdZ8jh3H9KXWV1BiW0E5qHNa9YnuEatarkG53xHd9LpzRovfD342VPcuuuuZxHn9V+fZgG7hmnOsrOnV36/9Azqp26e3X6BH+3jb5DXZ1ZLDB813jHF+z1S3B4W+vZ04yx7qGOr3udoH+OoBUFsgIkFiUDeKih7SfB8Q3A8a6vO0NzrNzdzqhL2oFsrzNMGFAlvfI2q5ng+jrofxgA9BFprpDfQPzGraEp/lxLkv1bpoH5ScyzdQ/cUY87SmeygjRuZ7CYPVJIA7Zz5BfV2b6X2f8zp5JK6+CUufd7rvTGkhppoQe9WYNFaBa2ietjVtlTWWrWaGO4wHZ805aX5uHUVkash4hNbgqhmhEzzyNfvn6a5VFs51V021u2nqmnjBQenj1Dyo/VBZwaXye4Bt0A8bannFtyPND2PPT4ga6oVmc7rz2YHzHNad0yXvr7rz1HneSVznqm7eYlz7Uq6LvqaM+/vx4pQaJKSV0j0ZcwTVl6I4WM1z9HimhzZv/kM667Nyrr4JS12HrGNh/05rriN2VC0prxqTxt7buHuVTttqaJo1cPPG4TGlyY5J3cAQmuNd4vpV6nvkuj/I5dibpNpx1bDSndOua7bmc+w4pldNtb3WQfuZoAIquE+V+x3RTR/4+Plf1ywCLFN9D5x9tmpn0Xyfyppe/R0gYpaS+1WD5jvGOL9nvvct65ipe6bHPUdSch8AACoGIhIkBtWsxfvht5hcrvBrp6r1oXtgNVF9fhVqVpR7M+dlmLUFbbVQNAU/d+0VJ0oGOG+gPg9BFvqHMP0Ds/Pm7K7B6ML5C612Xyj0N//1pXGrDy4roqnsqHvwARAb/CSehasGsvr+6Zoq2QqvtpPXPJ9TxvdI80GO78c/nv80HT7YRwd0gtQaBMs8R9X3Xt98Tv6wYLxfON1LPOCW+p66HrjVcmx9z9pQteRSxgebW6wrzGgEQlnfT7WvHZ3TK6x95byONBuqaZ1+gA43OeN+JP9rUPk9QHOOGWjvVWpUb9sPY89PcJci9mOragSWOJ7qRwP1XdWd0yULy7rz1GN7nOeqqwajGyV3nAJJf6/XfD9onZbGVf+RhYja0KPuQewah9pn7i5QitG0WlDHUNNUv9B/rv0ckPspZRwf7eXVPN/MffyPdP7Th+lg3wHSX17t/R6qbfDog1Y+LxWdFyWuX8XrUYzreUctR9sHdaEfO+t+ojunNeeW/rlK4py+5DoUmofqv0cgOOo8K+M+pbuuWWiWp5u+5HXQjk9zZTvqc7qnjKu1HqeUK7fsUtb0Vuuyg/So6N/XO2JsNp/B/1x4XL+d3zO1zt36X0wNnPcdj3uOpOQ+AABUDEQkSA7qV3htE2AnZo2oVFsXPXSp8DDs2T+QVbB03+iC3swZJQvU56iCn12WqIdN7wKsekhx3kB9HoIs9A9h+gdm583ZvyaZ1eG62naPfWFS6ubPo/Iti4EB+rrShVHJU9005d4oEAesvu9aKN0/J0dfLIb7BxsTfeAVDwSxPGIWnp01ZfIrWasGrbsAaMQpPfNrdGLALFRafVCqAq69LzmJ1Y+p/C6pHzxSmXlHs9k8rWRV7Sn7daHEOV7qe+p64FbLaaH2gRPFo/XmVygr9lmaRqyO33SCRyNayvx+Wn3Edn6cilur2vuBc15Hmg91HdfXvHMg7l0paut6iIzbURX3AN11lA+x7l7l/GFM1c43lmGdWyX6NlaogqQ6p3TndMkCuO489dge57laoq9Lhfq+Wt8jj4Ksieb7oeDRU5ePuQbXSnVPuadtEKq/TL5Hjj3p7nuRhfeqdgRYdT45+820X9Ps32l1zDTX47UTNMDLSvVS4fJq9pntHryi0J+2eTzUc5Zm4A3rGsfv286LEtcvdd7rjrXreUcth0XuibXibVL3GHu/mrpzWnNu6Z+rJK7pC32VZ+Y81sH5maAC1PWljPuUutZomxAXvj+u8ynwddCO+h7ou9oQtZ+5y4Kv/oPxHCB/RPD4UaDwzF6Q++WWXcqbXu4Lz/VxovadvjY/3TlDD43P0Kkn/yf9E//tcf12fs+s2sXGeul3iywvWsezknsoAKBaICJBglC1kdrpyFLpO6DVx1b7ERKTKnnmWZvSfbMs92ZuIpcjCk5yfdU6SFTByVOKqoKfJVw1D1Ula4qo2jTFD2HBRKR6QPIumKr1d/UZpt9R2pt/fuOeqyl2fuMF+quMKaMCFe5BJMmvTMk+wDitlO7qo4NylMpMT0E4F/Ubxli1YQqjCKuRbtPd3eKh0l0ATFM6bTyMqhFvPUfBL0i04tFxVdNB26AhavAOo7BYNLr+Tu5XMk09PWbBuvBLfIkHXF1hReF64FbLMQfEsUZG9hxJuT4iUr+v1CBDarAer6bbTYQ6Xx3XdzeFGmrtR5ZEoanye4DmHDPwvFfJ+574HLm+ah0U6gcATymq7p3OGvD2c0zTZ7KFqxsGxmN7XOequi97/zCm1t+q/edRkDXRicg8bdxzNsXmH8n+ijLis71kQSMwvptDarAv4/vZ1kE9GTkC8ME+2qr6Zkx10JBjECQlC3lk60PZY7bvdJq6u1mM258N5H5KG9dX23XQcyT1dWN68QOU7ZppH9ncNvBdQaYWj9C+k9c93UM94hnPdn0pcf1S573uWHuKSDEgjvse4xo4SieVNOdWeSKS749KOBbWQe3XlLFuPL1n020QEHV98TguWlRroPvI2T26Gglaez7ZryW6c8aDO0/sFd8D9w+eRBvGcvjzzJZUqr9f4zxy/CjAWOtmq2VbbtmlvOnzxjWXnw80A38y+RX64tHDND5znlbkNfupUfN5QjfKtvps60dpuQ+dLVJc3zPVFZbxmnNQMr5OLogfo+0/dld4DwUAVAVEJEgU1qiXrto6Nmy/rFsPgOqmZfsV344lLqu4mSvM2i6baOzz+oKfVYC1//puQw0OUGjOrXuokgUFXX9yqtmr4yEsmIgsFFjSRunLvXpqcABb874S+8K9fFUDRy+TfftQAvEgt0onRm0FY1u4X9BD0xeKa/wJ8rR2btQskKrpU220c/QcrRmFNz7H3AXAQVpYO0ejqp9HNc/wAl13Lj9/nRaGd1KbeHgtpHVLH42fK66dsn4xS/uLug5IUdvOQzR90SgySDFT+CW+xAOurrCicBVSC8uZc35+6xY6kL3oKLDIa0DR52pES1nfT0l+jc45mqum2nbS6LkLNOm1
rU3HunEtM49RqYG3CuLBdu2t+B6g3+/e9ypZW2bTGH1eXNfd113Vl2Sqd9a4ujspSFSrQKc7p9VrmuaD1r21aN09tsd1rqpCuH2EfRuq+bm9OwON/Cng+H5Y/a7qZLLqLiJF0bodGdfJC9N0yPajjpVS3Zvwd3q0+PpnfqfXaEWcP5rni8EFcU22RuKW8wwvXC+6XjL56ws0LH6sKUzLsm1L3zidK7rYr9PF7P7ipvB8zT40Tebllc83WyuSEtcvdd7rjrWniBycc31+65YDlOUPt6OTSppzq1wRyeSuzBX6JhYx9tOBabpyotS5C4Kjri8ex0VL4VqX2jZEczxq9e2rdHqCz5XU/9fe+YRKdlx52jstatHqhVZayL3qhTHjhRfG0IzRwiAMQr0yBjeIMWO0abAMs2mENBuBCga0mBJaWV54KYEb5EY2tLChsIQFlkfMQBmr3cLYIGOwtFAhFUhwJ7/38vfeqVMn7o17M2++fFW/D4LMG39OnDhxIuJG5M3M4YEHSCv8aeFB5Pk+5b7hwW88M7zKP0f/4cbwxkvfGf4e37zvy8NVHbD9/gfD10/G4JXhi0+8NLxxg7zo9vXt088PDt8PC8ncvcvsvc7ZBw/o88Lpv17/4Q/D26+/MDxx9kFweJL8tvyF/uGDirMPvv7L/xx+uZH5p+0TC9U4+/0Pvn46B1754vDES28MN/iznLdfHZ75uj5I/X5YX5euocaYXfBBpLnLiE/rfHN44a3bDxBuvvfvwzP6x9vbnoi6Nbzz7OlCc9+X/2X4Rbgxvvnra8PXTxbJ2397afbiLLabzCtX0KN6muL80zq+vnq+Z+Ag5p9Pb0I2C+75p3zVTdX5geCXn/312SY4fu01L9qnC3n+Tb1icf7zK6dfvfrcA8M/vnTjfIPNRuaf//50oxG/Djtmi0L+2cY39cOmI4Zntzdm5e/TmUsJfwqjf1TUTeU4/NEB+f90/tTsrQ+GP6XyN/9CHv5pNV6HMk1uDn/p0kd6pHxbXf7wJz1FpXznupxxR97Azb8k2XmsjMg9Qe2I6du4WJ90iP8QesZEHXfov31CrevnMe4B4pNg33xheOu2A5ebw3v//szZPwTf/vTI0jVg/ibq9IOlK5v1qNFvmw3x6eHyRsd/iR8Q3Bx+fW27Ufzbbw5nS2O1AT87EHxg+KdXztfkmzdeGv7xxD6EqHu1pm0o1pJb7zx7GsfXkX8R1vvNenFtu+H822++srHoljkHkeGg8/a2I/7Z0wPk277ifGycz2XteSJRzEmnc3ScO7dyz+aMYl5psZ3XJudi6ZHyab04mxdH5q878gbuWA+yb43OixvUjph+x5xd1BMp8kdOy/5hUBV6Urr9T8Cmj8b8MsWfr5/tH87DleGrz1wfrp/MsUFeNQ/OOYiETX1Xv3H+hLPCfX/zX4dn0sH4yUH/bYfX27wPfuOOQ/TVDyI33MofAJ+F7aF6GlYt/a/83beHH8VPjW+9Pnw/fAirhxLqA/9bw+9eeeK2D0pOA4e7V08+2Dhn+RpqjFmODyLN3cet3w0/+mZYvK88MDz00EPDQ+F3na589Zm0CG249cfhlbOvNV0ZHqDMg/oEf3OzcfX8QA8WH0SefYV8Exo/Ss4m6qpueO77m+HBqD9fqwqbufMF9PZF+IN/e+LsU32+ovXQQ6dPSNz34HeGq/8j3TRt0E3uqb2+Mjx3oli9ON/89dWzr9eeyn5oeEDXG/mvxB3bqC0q+fG35rb98NCD2yc1NjcQ34lfPzXmXmH8RvlQ/PbFR4YvPPyt4cX8Y5ob9DWw+Odb9zq3fvej4Zth43TlAeaz8/lSG9k75rRFa8CCTZSevtyE+JuskVt/fGX4jtqwXU8f1JPJV746XI27ymoDvuH//e+vnv/sAjbYrq38BMOzJ08W3q67vlJ9ur78t+EVHKpcS24Nf3zlO2dr3al9tV4g/+rtm95ZB5Ebzg6TN0H3ErovKL7ibC4po/cph+Jnw5Obe69Hn/rXO58+PvtAoP6zMjOHD4Z3fsrPA5x/PbifW8MH//HG8BN+XuAnbwz/sT1E/uCdn94u74N3hp+S56fvnK+F//nLk58aePnkX1r6ufmXt4fXKbcJr789doDPB4fK+5PhjRv1hw+num7/LCYjvUPi3PznRH024fW3m4fupwTbjujPhwT/9/WtzO1vR/7nL7lu9CcfKrz9+mn+TZ/dKD9ckE/8cqibOWIDY8xifBBp7lpu3nh9eOFpfnfudOP30ENfGB7+1tPDS29UP+IuNpuat14anv7Ww8MXTspsbgqfvDq8WnyViQ05cp/82TYi8tsXh0coXybylYHHTsp+5fS0r8HN4carV4cnH/3Kqf78lt7TL6WnauC3w4uPoOsjw4vpe283b7x6+uP62/LfvfrqydM1p7qn/Lf+OPzi2reHr5B3E777ryzvkv3k5hY5cfPG8OrVJ8/sy+HE0y+9dedXakdt0ZKf2t7Vd8bczRzHQeTZn9Vc+eLwKD8if7JpeGF4+tHt771d+Wr577j3Npv57PWNjf77o2fzq+bzN94bm9HmrgHFPL1hdK0afj/84DHK6sOnBpv14a2Xnh6+9fAXTnX5yqPDk9v15DY03z/yYvoa9nZtVVs25bVe/OzJQvftzzecrsP/MPyv/7OJG1lLbv3xreGlp781PPwF8m/W1kefHK6+Gp7YFz978iT9kbxYnsBB0KZ81j2tdX19Zy4VR3EQef5nNQ/+w3dPfwN4M7/yh32nvwO8iX/i3/whjzHGmLsCH0QaY4wxR89xHERyoPS7V74/PPx326fzzgK/9/b94ZU7fnzTGGOOnKM4iNzw5+vDtW/f+VvF/Pbmt68VT04bY4wxlxQfRBpjjDFHz8TvNV4A57/vOfF7b8YYc8xM/SbkwQm/73k0OhljjDH7wweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGm
MMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QaY4wxxhhjjDHGGGNWxweRxhhjjDHGGGOMMcaY1fFBpDHGGGOMMcYYY4wxZnV8EGmMMcYYY4wxxhhjjFkdH0QekI8++mj7ztyrfPrpp8OHH35oXzDGGGMKWCMJxhhjLgbPw8asz70+ziYPIjHOz3/+8+4Qefnll4fvfe97w2uvvbaNORyffPLJUXUutvnc5w5z7vvDH/5weO65505s/+67725j23Ao9tRTT52U8QHZevz4xz8evva1rw2PP/74cP/994+OizyuFN57771tjmVwEJplMlYi1Zjv8aN9gk7omjlW/3z//ffP5jsFrolHZ8bWxx9/vM19OUB35hLawuua4F/Uc+3atW3McTB3LoU8ft58881tyjmM45iHcCxr1RRLbEK+2NYbN25sU875zW9+c1seQjUHXGZ+9atfndlujfuiteWvDT7w2GOPnayTX/rSl07acVFo/uPeKOuxj3umi7w/vldYauNjXY8q8pw5FdbgkPcKmSXr0UVz7P51TPOwWY/LNM/djRzbOMMPLmIe7T6IRDkO0jhEyQsbCwHGJJ0FSWBc4ng9JOiIPs8888yJ3hz68P4iN3o4GbbgMGpt6BP1B++nIA95W/kPefhzrAdNu8IYwb74IfB+bNLRuPr85z9/kpf3xO16EAnIiX0unSKka+xQNxPmEpb2JzqhG+1n/tBcwjg6NtAVOz3//PO39Q92Q1/mI3S/LAdNAn3lt/uYwzlUyofeQv1NOCYYB3PmUtCaGduUy+In2BbZ+AjrwmXxjyU24caGvBoLhNxe5hiNGeSTf8lB5JifXTTYQX7B675ZW/6a0N/Mo/gIaM6/SNBJa3CEeOIIvF+C2lfNrcfsw5eJpTaOc/exg/8xb8pPuc6BexPtQdZYZ/Z9rzAH2jd3Pbpojtm/sOGh5mHPcxfLZZrnDs3aZxGHHGe9yBeqe0fG6ZL78R66W43RWgoKFkIWI6EnwA5x+CbQLx/waIFksboI2HTicOhwqEVaE0zvwoxtWvY51OEPNzOHvok5FHmSoV/ioX2LNScnfFJ+2Rqju/THLv0ZF0gFTdjHBGMDG47NcWtuAA4Buu9jXGKj1vrBHEkdY+vLRTF3LhXk1+aQ1+ownzzH2OYpltqEcrIJ/V2NCfLMlRsZ87NjgLbR/rV0XFv+WqBv9CnmhF38YF/gp+iVGbtn6gE/RXa1dhy7D18Wltr4mNejFi0/FbSJuXfNMUX96HFo8txx7Fym+50152HPcxfLZZznDsEhziIOOc56QSfajS4ZztXW0m+vB5HkuchHfHEeDgUqHTEu+lcGXhsOQuMnlofQITv5UtAVOYcAG609+C8K+d9clpbrAdk6pMc3q0OBXfpjl/7Ef/FdhWNEY2zqgFQ2vtcPIrHT2PpxrCydS8lPWRZwyleHFspz2VhqE8pRRofzPKmTUZ6lHLuf0TbavpaOa8tfC/Rd4lNrs+Ya3OKyzpWXibvNxj1+SnurQ9l9Qf37uFeYy7HOHZeRQ9rS85w5Rg5xFnGZ5iz2rtyzr6Vr990VCmC0PGkQT6eJsa+brg3GQkdCRpvBi+h0HJoDPSbdyoZL4TFZ2kzIj7fvy8m1YZyCx5jRY+zRXXSUvjkfcdRzETcx0mkpY+0SPTeJFUvL9fSHbM0BCXVUY3esP9Tu7HtA/C79if+uPVZloyWP4FOG9hF6PlhoHfRKhyrtEPTUv0s/CjY/yFk6943NdbvQ4wNL51Lyq736ECpvAmOeirExthZr2oRylMEOeho7jx/laSH9qrltVz+bYh9+SNvW1HGp/DG7zkH2mQv6jvnUWnPAFEvX4KX0+LDmBdl5bKwupccfoh6tfBfVb2Mc63q0Cy0/jfd17NWqNs9pi/JWPkf9u94rLGFs7oh9NebLS5D/79MHsr5zx7bG7dI5oXceHpMvuxBarL1Wt+jR/xDkfr5MrGXDXX0XZFN8kPDZZ59tU84ZG7fE7zqPxb6t6oCpcXZM9Jyfxb6j/a12V3TfXaFANWm0FjaejNTCmNNRlkM5NmccdHEQQh7eE7fLU5XoEw9GhXQ5dKcz2ar9siHtbIGe2rSSXzYhxA0bjsFGjlfkk44d1fnEIYMnTohHLrKwe/yKIH1BmjaFgjzSg0AeQu5L5Ktu9SHvI+pvPa0q3SSL/lb9sS79wDi6cK00QdsVH/WKvocvoA96cc2Pl4tsQ95XT+i0iH5MecmLukh3tY/3hOprmhVqRy89/SGQLaRfHjsxj1CbkE0dyOda/jnVnz0gF//X+CH02mwK5KAfOiGXV2w2Rz5toW3I6QG7xsWVumQ3dJAf0V4hO1OPbM0rcboWlMP/FKKt6Q/F60eIe+oX6kMR9YrxyFd81E1xBOrgmqCfJmjJE8ilHK/I5T36Mv4E5YhHxtScB1z3+gDpyJ27dpBfdkCubBDriHki5JH+pOcxpvRss1Yf9MyJyFzbJpRTGfpJ+se+jHkiU3Ob2kcgH9cE/Ay/xzYKmudiHAEYOzkOeI9cXqmf92N+SBvQkWuC+o540pEhpCshxi9B8tENXdEBubxW43vKrmuvp9gP+bKb9I1r0b5sPwaykI8cAuWpQ20V0pf2xXiBjWUj2SPKYR2INs1+QBwBvaWL5kpeka2+kl3IM0WcMyhHkE7oJ6b8AbABZWgfeQi5LSD9eCWN97nfAN2IJ115CLIh6VF/4qUbcbqOLLVxrIfXTE+bKEc8MrAn9iMOPclLHftGOkfQiTpboBtl0EltQdfcP4BfK5281EfQPQUgq6dvxsj9LD8dk0Uc+fO6QVniSee9xmNkrl+Byihd/dszz7T8i72b2opMQmuOqUAu+ZFJWV7pL+J7oM9VRrbgWvMw6bRTY162jTaXvyGDPPgVeeJcDsglnkBerglTY5A6FY98QZz0bs39PfqPEevQHE+c6qKNqiPHZ1S39OjxS9J5lezY/iW0bDzGrjZssavvAuUowysBGyEzylCbZT/1lXyENYMytEl2IczZu2IT9OBVOsX1gVe1T33LdeUnkVZ/UY/iqU/E9Y/6s//ke0HipZP6U7oSR1B5giBvbG/0j166TzioTA3iPYGGjFWIYSmT07lGcXW+8nGN3Dmd3oP0pM5DI3sJdWqMy9Ch6nRsgr1iGd08RMeVXNlUZWhzrIvr6ESCusifkdwKnBS70kZB/eSPfU57qDdOCOTJcrmudBNVesvHFI9+9D824BpdgDiusaVQmevXr29jxkEW+miCAbU/9g2M2XGMOeV6+0NEW6rtcVxCtjcQR4jtphwTUAR5Vfke0JeyvNJ30i/211KQmW2C/9OG2KYxpM+S9lEH9cV+Ao3rvPgSJ32ln+qPY7ulU/bJJfVX7azipUO0LWj85fhIJU86RZ+ESn
/Nm/Tj1Jw3xweUN8rsgfxRvvRjnKiOnEegLyHq0jvGWn2g+NacqHbGcvu2CeViGeTnOnMeoP6euU1tinFC+bO9uEaPCDahLrV7iR+SRn504VptqnREBnH0y65IfpZHHfRlbEOvXad8Zx/rabaT2Kftx1DZOP+pXYQMdeR46sRG0XbYXHrSFj39smSu5J+6q/Ge48ZANjpJPtdqR68/YCtkjN3Tzek34mIbSEc+0Ac6oADVgz4an7Jl7uclNhakZ7su8UXaEfXiek5/9YJM6qMuBenQIpYRXOe20M/Zr3lPXmwsuCb09M0UUZZABnFZP+mS65AMIV3iPCWUt0d37EZQPqBf8/o8BnKjH2Bj4qJMIK4H2SDaC79EryxzjJYtK9/XOJV89U9sl3wwy1PeqG8my4LWmFY9tBkdczt69J8i1hH9nmvi6f8qPs8XxBFEr18K2S6PgyUgJ9u4xT5sWKG+im2c47vkoXy83wFkxDWKdhKiTOrY1961sg+gR++cNUWlW2tMKJ4QbaM+y3q2xuqYrvl+AZCTdRmjb4bbIMfHAFRAkNKtClUmpxOXFZfsNUAu8qtBvjY4YESOkdsfwV45jxwAx6Hjs61wMsrlJyLpo4hskZkbD0qLAx00+YKeHMvtpS/ihA3ky+2KVOktH4v+KojTU2mkYccM8c8999z2qo3aldsAtJW0ePM8Zscx5pRT3rH+iGRbymfipJzzyH/zWNJcEJ/64zqX74V6WnVoLCyF8rQ1TsJzZcsOS9qnuvIiACwOhAh58dW4eMq/s99TlvgIeeP4W1J/1c4qvqVXKz6S5Wmuq+ZK2T9+aCX/pX0RZBIfQZ9eH5DcHD+F6ohIF8VXeXYdY8iMdQjFx/zESZ50WdMmlMtl6OMoq8oju03NbZTjOrdd5Py6gSXE8UW7deO21A9j/tierCP1oFesfxckP99cS6+oy1y7tnyHtF3WU6j027ftW0gWr5mx9TPHq/7sS8RlPWRT+YFoxYNkZV+J/TIF5aMM+l5jvscfeu7p5vSb2hs3SspT2YB4ZEcbzLXlmI0F6dGuS32RPotUfrMPJJd6FVq+K7A5ukdbVmV0X5FtTh35icjevpmCMpVfI4e0uD4qjroi0SdFSy7xPbqrr3vW5zGyHqort6HStYJy6KmxDK25Z4zKlsgkLq8pssWbb765jTkt3zOW1d4cHyE9t79VTnpXc/8c/cdQHXlMT8Vn+8/1yypesrMfzqUlP7MvG1ZgH9qz1HcZs+RFRoRrrWXSs2fcct077gS6L1kfetoXqXRDBvG5/YrvuReEufFQySdf1mWM7tVQDcrCuW5V2CrTipvb8T3IyeLEeChwPtqEHRTkkIS44EWwDenkzTCoSGvZXEgGdUbQh/jM3HggXnXEEG9k+ARfeaYg35gPVOnIJT7bQ/HVpKBJCz2lswLxeTGpGGuXbB8n5jE7jjFVjrbIj8gnnWJo3YxWtmYiJa8m65xH44n0WAd2Jj76NNdVHfH3M2KYQgthJXMJyGNeoL9kZ9rSg8Yxtp2LbFwhPXY9bIrzHX0W27WP+qGKn9Irx0eyvLG5TvLiIQf5iIttBbWposcHWnKnIH/WXW2SvCrPrmOMvMRnuYqv5sTImjahXC5DO9UO6q7ykK76YshzG3Fc57YL6a21lVfFxTLxMH6pH1brN0Qd6WvGo26W90HLBtKLdMG14mJo2XWt9RQq/fZt+xaaE/G/TGv+qOJVf6RqF0j/3LZWPGjdIVA/ebLcKSibP2wSko3MGKI/9NzTzek3XUd5iqvKE0/bI638c+MjuZ6lvkhapOVPu1LJxZ9zX1dzDXGaC3XoKDS+s80rqnw9tq5o1UmbSIvtatlaEM86o3yV3Cq+0n3O+jxGru/GjRtn8xDx1DN3HoOe9XuMypa8J442qr0Erd0xL2AD4pAlu0QbAulVfIT0nj4B6V3ZTGV69W+hOnL+ufFCOihfbiu04qtxsISW/Ay6kndXG46x1Hd5AEvzFmsVOqJXHIu73ldPsa/1YYpKN8nPdbfiW3XPjQf1E/bHluSlH+fQvRqONTTHiVYZHIUTVHU8jkc+HGSfaIAsmcz3Ae2k7TlowNDuCvKQXnV6y6aZlgw5TWZuPBBPULtyAA3+qi0Z8uUBFqnSW/YYsxN+Rxr9EPVViJ9atBhrFzJy2pgdx5gqRx2qh3yE3B6FTMvWksPEmvOo3Zpwcog/UEu+qg71TQ6CQ7DKrpDzLoH+Z26g/9GP+UGLUavejBadXl2oQ4d7Y+XU31OLomyIzTNqF7Ag0LbIPuqHKr6l15i+IssbK6M0/FGQjzjSImpTZI4PtOROQf5Kd+pCHnXrpiuy6xiTbcgbacWLQ9iEclUZtZn7gioPadK9CmKqjUD7NCZ41RghHpDRK1NpPX4oVEZ2ntJ3Li19K724Vt4qiJZMwG9Ioz25PKFnPQXyZv3G6lXaHNu3oAyhopo/oIrX2I73duQjxDkVWm0bazMgW3UrRBtMQX7KV0gedVcBNFbHbDzWBqVFnfO17t2rDQ3xWf+5thzTT+R65raJfMSRFlHf7ZuW3Gwr9iVCcx8HYLziW/HQGTS+s5yKKl+PrSvG6iSNIFq25pp2qX7pUsmt4ivd5f/IJT6HuD6P0apP+ipw3SNzzvo9Bm3IZaLdcnsJcZxSL+s4evCedMryGqlsm1GdkVY51VO1dY7+Y5C3qmNuPNe7+KUgjbALY/IjUUd0zqHXhhX79l0F5jI9sT1n3JKvxyYR2QdZGaX1rA9TVLq16m7Ft+qeGy/Ud+QhaOz30u3BrQaNMWYEFMUhaIAO7PaJDBM//cPZ883gWlBPflxV6BCD9leMdXpvP7Rk4MDEZ3rj40AivioT0eCv2pIhXxxg2ZFzOrTsMWYn+qaSNYexdlW2b9l3iqly1MUnqUC+OXW02i/98d+cZx/9OfVE5FgdxBN2gXYhI/pG1WdTaOJtfaAQYY471EFkXMBJr8YRoaK3fqjiW3rleK7j3AxZ3lgblRbnI/IRR1pEbYrM8YFW/BTkr3QH6cRrzrPLGAPKEZ/ltuLFIWxCuVaZ+DRIzkM8YYrcRq6zn2l8EK9Nucrxih3iTXWWGVFajx8KldH6L7tnPZfS0rfSi2vCFC2ZwFxBWvbDuVT6jdWrtDm2b0EZQoXGaqYVjx8TtJnCx6q+bbUtx3Ot8vwAvcBHqUP+E59iH4O8rb4ijTBGz/zUahsoLfYb+rCekp+AzbiHr6Bs1r9VX28817mPcj0tWaC0Hl9s+c2u9MqNbZKvRltnOXPGd5VvzG5jtOqUPgRR2Zr+JC7vA1tyq/hK9x7/7yHXh45RT+Srrp6vvc5Zv8eoylR2qGA+Ih/t0j1kq2yO53pqDEJLXqW3aJWZS6uOOfH78EuoxsESWvIz+7Jhxa6+iy3iPocyzGnMbXoKcc64zTbJe6iKMfsorWd9mCLrBq26W/GtunvjsYfuN6Js/Jk01m/y99Kds9WgMVplMCJpKEza2Ek6hyzk1Q9890B+BnmWS11x4P/1r389ycvrvqFtrRspYIBgmzzxAnrGTo/QJtK0g
YqQpja3ZGB74jO98XEAaENXHcRo4PJKnjgABbqNfQqRy+R0kPzsY8iu4oUOkbRYRnqe4CAP5dXOCHahfw/xG5Gkq897+iOSbRnRwpDzqN376M8W5KvGr/o068RNWu/vk2jxRn4kjhfeV36RkT7Yaiw/tiePUD/l9gF6Zd2qNo/5t9KoJ8uCfdQPVXzveOSaEMnyGD+MI/TNqJ44VmMfRpBJvJjrAy25U0hOC60BOc8uYwx6+yByKJtQrlVGN+iV3N65LbeR6yxL9mENVT6g7dg8r6378kORdQT5QjUm58xvUMmHSq+lds1gO9KreTDaZoxKv33bvoXmiOp+rHVTnecVoF7i6Ud0iTbMtGya47lWe/DP3FfURf7K9hXkzXOG6PEH2X1sfprbb1wDbc72yFT699pS5HiuCZFcz1rrkZg7zjMtuRH0lP6yQW5PlKP2c02YGt/kiTaDbOteKlkgu8b9VWVr/JM4+a2IcqMPV/VVutNe4sb8v4dcH2UrmcytsV0Vc9fvMSpbyven9p2UI8R1DDnEIZf3khvjda00QXruE401lROV3qJX/yladcyJ34dfgmTHccA5CXXp4ZQeWvLzfDTHhnPObfbhu6RV+zDmNtlyzrjNNqnKZPa1PkyRdYPWmEB2Fd+quzeea9ma+Dx+pE8v3TlbDRqjVQYj0lmkK1Q3gEDHIqN3gUaOnIF6YyA+ot+6id/b3wfowGClXS00GWXbAHGktcqrbL5ZZEKSHVsysD3xmal4OV2eLLgmT5wAcMo4WfGefLmPKZcHv8ohIw/+mA6afIjPdqTdVbxQeq6DAdTrD+iSJz/0riajln2naJWjTvlBnAR6+gNIz3EZZCMrQ5ur/kTenP5swYSe2wDYNLeXPMTl+BbKn/1YdsYv8JlcdwvGIOXQrSqjuSDrRpnK94jP45q43A9T/o0+tDH7odi1fiA++hDtnxqPqpO6iItU9VQ+Tj3ky3mpk7xZLvmIF3N9oCV3Cuw45vOyd9WHS8YY9PRBVd+hbII9sn9F1N+VXHRDH+kA+EVsv9o45mdAHkKUpbqrMaO0XfxQSMfYD8hV3VEn3hOf6x6jkg8tvebYNcsUSpfdBfbvXU9b+u3T9mNQjrETUbsIGerO8cqPj/NeIdpWKG+2qeIrHyYuj1H5ztIDkEyPP/CefGP3dHP6jXzcv8pehFguUukvm7VsOcfGoqpnH75IPuIjyCCO0Ptka6aSG0EP+k22kA3ifIce2nOB2qS1SvYS+Hkc3+TJdlA9uQ+moAwh2xr98D3ei8rW6it0F+hLnHSM7YnxoqV77/o8Rq5PdWW/r+rJyH/IG0E+8cimDdFmLSpbAuOT+Lx+k1/6kU6IbVA58iFTctXeqTEY5x30b93ftPQWPfpP0apjTvwSvyT0jAPaofy9D2/FeoX8iRDno14bag7pObdRXbv4rmREmwJzW5yfltxXY/fYH2Oob3Nf0ZZs45bPTBF1A+TPvedv1T0Vr36P96nE53tm+iH35xjtVWsLHYYB5ViEyqgZvkKCIioTHUTOXIXs4Bic+J4TfgwT9cwh64yOxMevu+xKtlUeGFDZUwMjpmG/rLPA4cmHfJyGfDgDgynLwImUR3ViV+rUIInxceKhHDKoD/nZoekvyjMQyEs6MqIj856yyOc9/U8eXiNcq03SL6L02B4NQALXkH2P+CwLNFhom3REHgdhPagMMtQuydMgZULK9s02rKBMLKdrBfUvITLVH7m/0belD/Yhb4Y25f5ETvb1qf4cQ3XzKjvrOoK/yxZq4xToRX7ZiFfkoyPtIMx5Alv2iL5JHchVHRn1E+VIp1y2IfaKfcV74vAv8sb4DHJIa9lkaf35pqRnPArqkk7UpTFS1RPJcx1lkRX9iTLyA9KjTpIbfbDXB7Lc7H8VuT3IbJWjbeiamTvGpvqgZ05c0ya9czKQRv2ZnrUGWn4WIQ+yIrSDctRTscQPcx2VHUB1q5zG2dz5rSU/6xX7bMquh1hPs37SW+zD9lNUduAaWyCTV+qbumeSDjmgr3wx2zT2B5CXeLVV5bAD9ZFOGmOW62oOyaB71Jv3uV7oGWe8jzpKD14jPf0G1CW9YiC/NnWV/sRRR7Ql8bDUxlU9kSW+iA3JG+ViL5XROCfEDXEPyMz65iBdCHF+k91pE31HXrUP/UgXpNMW0mkLbSY/v8E2t296ID9l0QNbq35kyCeA62hr9TPzDmWJJ07jGZ2kCz6yRPfe9bmiqg/kS+jMe/kLfdDjE+iAPI1bXtERecglTN3XUl+0ZWwzYEPSo+9jJ0GcyvOe+tGDIJmx76INSctrNW1CXhw/kkWQflN6iyn9x8h1IGtO/Fy/FCo/NQ6AcxLyE7ItMy0/hLH5qMeGal/vk5m7+i5tRSfySS+Vj//q3ztu5XfEIwNb9YItsn2oM8rA1uRBD9Kj7afoHRPVfRvkutX2HB/1xW6qh/olC8iPjaQPr1yTr5fJg8h9c/369ZPG5pt9GiBjxsdXTRsGH3YjrA11jA1G0siTJ8YIaVNy1KaWnNhm8kzl70HykLUEtWsXGfumpz92Re3epT+nkF2nyjOfkG8Okq0+k667IJm9ctRPu9ZbMdYvYtf6ZTMCfTTV3/IZ2byXXM++kEzpo3qOBdmrZ4wRevpgCsk6VpvIZ8faJ7upDRniq/JTY0a2mKp/3yyZ3+bSY9ceZJ+W7ZdyKNtnO+h6CjZK2vxktHnjBr8X6s921KYq2mLfdhZzxtnYuJnqN23ccpo2gewJ1mpjZeMeptp0mYhtEbSp1afKu1afCPpdG17GD2GJrdXHsT2830e/VbKXgj1lU9l4iY4qK1nq330heWMysQfpUf+oU0Q2rNIg1kdeXS+xDfTofwjU7im/XDIOmFNb9twHa9lQMqW76pmCtVc2ke+NjcnK9hnVPWXrimifJeWnyPJ30XUOsm1ENpRNc3oPBz+IZIDE09QMgy6fThtjzBjMK3N+F8UYYy4Lnt+OHzYD3L/mD9kFfUi6uR1swgFuBXsB0tfeYJnjg34f2ysacy+wZBzwAc7U06/GHAsHvyvS7zJy05bhBo60l19+eRtjjDHjMJfwpMncrzQZY8yx4/ntcsDXs7l/5YmVCjaTPPlnbgeb8G2oCr7ehU17fy7H3D0sOYAx5m5j7jjgHKU1nxpzjFzIx7MMKgI3GXqUk5s3bkhaN3HGGFPBJn3J4+DGGHPseH67PLAJ5D6Wpx/1dXqe6uOJP/rRT/bdCTbBNmyesRU2w3bYDFvy3tw78FU/9occwOgQZuwrlMbcjSwZB8ylPA3pdcZcJi7seyJ8ys8/KnHTQWCA+RN/Y4wxxhhzWeF+Vve2BG8Mp+Gpx2gz/xSBMcYYc3fjH6wxxhhjjDHGGGOMMcasjg8ijTHGGGOMMcYYY4wxq+ODSGOMMcYYY4wxxhhjzOr4INIYY4wxxhhjjDHGGLM69+xBJP9AxT/18W9U/DC2uR3+eQvbLPnHwvhvX7waU8E/ZF6kj/DPpvLTY/u3/osaQ3P65Jjsdwhf8rxmjDHGGGOMMbtzTz8RyUaaTaUPIu/k
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=(5, 5)),         # convolution: 3 input channels -> 6 feature maps, 5x5 kernel
    nn.MaxPool2d(kernel_size=(2, 2), stride=2),  # downsampling with a 2x2 window, stride 2
    nn.Tanh(),                                   # activation (tanh)
    nn.Conv2d(6, 16, kernel_size=(5, 5)),        # second convolution: 6 -> 16 feature maps
    nn.MaxPool2d(kernel_size=(2, 2), stride=2),  # second pooling
    nn.Tanh(),                                   # second activation
    Flattener(),                                 # flatten the feature maps into a 1D vector
    nn.Linear(16 * 5 * 5, 120),                  # fully connected: (16 maps x 5x5 output) = 400 inputs -> 120 outputs
    nn.Tanh(),                                   # activation
    nn.Linear(120, 84),                          # fully connected layer 120 -> 84
    nn.Tanh(),                                   # activation
    nn.Linear(84, 10)                            # fully connected layer 84 -> 10 class scores
)
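# Shape check (for the 32x32 SVHN inputs): conv 5x5 -> 28x28, pool -> 14x14,
# conv 5x5 -> 10x10, pool -> 5x5, which is where 16 * 5 * 5 = 400 comes from.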
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
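# Note: weight_decay in optim.SGD adds an L2 penalty on the weights (1e-4 here).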
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.343403, Train accuracy: 0.542487, Val accuracy: 0.802471
Average loss: 0.609818, Train accuracy: 0.813091, Val accuracy: 0.851136
Average loss: 0.525782, Train accuracy: 0.837491, Val accuracy: 0.860010
Average loss: 0.473702, Train accuracy: 0.853718, Val accuracy: 0.875435
Average loss: 0.446730, Train accuracy: 0.862454, Val accuracy: 0.875435
Average loss: 0.421188, Train accuracy: 0.870474, Val accuracy: 0.885127
Average loss: 0.406773, Train accuracy: 0.874825, Val accuracy: 0.877005
Average loss: 0.393163, Train accuracy: 0.879825, Val accuracy: 0.881169
Average loss: 0.376561, Train accuracy: 0.883920, Val accuracy: 0.885878
Average loss: 0.367274, Train accuracy: 0.886752, Val accuracy: 0.871203
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
regs = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10 # train each configuration for 10 epochs
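# Note: the full grid below is 5 * 6 * 4 = 120 configurations at 10 epochs each,
# which is expensive; random search over the same ranges is a cheaper alternative.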
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}  # tried configurations and their accuracy metrics
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
for lr in learning_rates:  # iterate over all learning rates
    for anneal_epoch in anneal_epochs:  # annealing period
        for reg in regs:  # regularization strength
            lenet_model = nn.Sequential(  # model following the LeNet architecture
                nn.Conv2d(3, 6, kernel_size=(5, 5)),
                nn.MaxPool2d(kernel_size=(2, 2), stride=2),
                nn.Tanh(),
                nn.Conv2d(6, 16, kernel_size=(5, 5)),
                nn.MaxPool2d(kernel_size=(2, 2), stride=2),
                nn.Tanh(),
                Flattener(),
                nn.Linear(16 * 5 * 5, 120),
                nn.Tanh(),
                nn.Linear(120, 84),
                nn.Tanh(),
                nn.Linear(84, 10)
            )
            lenet_model.type(torch.cuda.FloatTensor)  # switch the parameters to CUDA float tensors
            lenet_model.to(device)  # move the model to the GPU
            loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)  # the loss function goes to the GPU too
            params = Hyperparams(lr, anneal_epoch, reg)  # hyperparameter set for this run
            optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg)  # optimizer
            scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)  # multiply lr by anneal_coeff every anneal_epoch epochs
            loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)  # train this configuration
            result = RunResult(lenet_model, train_history, val_history, val_history[-1])  # keep the final validation accuracy
            run_record[params] = result  # record the result of this run
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.87, best hyperparams: Hyperparams(learning_rate=1.0, anneal_epochs=5, reg=0.001)
###Markdown
Free exercise - let's catch up with LeNet and overtake it!
Try to find an architecture and training settings that beat our baselines.
Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations
###Code
best_model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=(3, 3)),
    nn.MaxPool2d(kernel_size=(2, 2), stride=2),
    nn.BatchNorm2d(64),      # batch normalization added
    nn.ReLU(inplace=True),   # activation replaced with ReLU
    # second convolutional block
    nn.Conv2d(64, 256, kernel_size=(3, 3)),
    nn.MaxPool2d(kernel_size=(2, 2), stride=2),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
    # three convolutional blocks instead of two
    nn.Conv2d(256, 256, kernel_size=(3, 3)),
    nn.MaxPool2d(kernel_size=(2, 2), stride=2),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
    Flattener(),                  # flatten for the fully connected layers
    nn.Linear(256 * 2 * 2, 120),  # fully connected layer that distills the strong features
    nn.BatchNorm1d(120),
    nn.ReLU(inplace=True),
    nn.Linear(120, 10)            # fully connected layer that maps those features to a class
)
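# Shape check (for 32x32 inputs): conv 3x3 -> 30x30, pool -> 15x15, conv -> 13x13,
# pool -> 6x6, conv -> 4x4, pool -> 2x2, hence nn.Linear(256 * 2 * 2, 120).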
best_model.type(torch.cuda.FloatTensor)  # switch the parameters to CUDA float tensors
best_model.to(device)  # move the model to the GPU
optimizer = optim.Adam(best_model.parameters(), lr=1e-3, weight_decay=1e-5)  # SGD replaced with Adam
scheduler = optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=1e-3, cycle_momentum=False)
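# Note: CyclicLR sweeps the learning rate back and forth between base_lr and max_lr;
# cycle_momentum=False is required with Adam, which has no 'momentum' parameter to cycle.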
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 20, scheduler)
###Output
Average loss: 1.078968, Train accuracy: 0.691550, Val accuracy: 0.857006
Average loss: 0.520047, Train accuracy: 0.852711, Val accuracy: 0.885196
Average loss: 0.407449, Train accuracy: 0.881036, Val accuracy: 0.899666
Average loss: 0.349635, Train accuracy: 0.895744, Val accuracy: 0.910586
Average loss: 0.311264, Train accuracy: 0.906665, Val accuracy: 0.910177
Average loss: 0.283639, Train accuracy: 0.916032, Val accuracy: 0.915910
Average loss: 0.259416, Train accuracy: 0.923711, Val accuracy: 0.916729
Average loss: 0.237700, Train accuracy: 0.929376, Val accuracy: 0.916183
Average loss: 0.217588, Train accuracy: 0.935314, Val accuracy: 0.919801
Average loss: 0.206099, Train accuracy: 0.938812, Val accuracy: 0.919596
Average loss: 0.192030, Train accuracy: 0.943026, Val accuracy: 0.918094
Average loss: 0.177849, Train accuracy: 0.946661, Val accuracy: 0.917617
Average loss: 0.165803, Train accuracy: 0.949561, Val accuracy: 0.920142
Average loss: 0.156029, Train accuracy: 0.952599, Val accuracy: 0.919118
Average loss: 0.143338, Train accuracy: 0.956984, Val accuracy: 0.923964
Average loss: 0.134080, Train accuracy: 0.959509, Val accuracy: 0.921848
Average loss: 0.131185, Train accuracy: 0.960106, Val accuracy: 0.917958
Average loss: 0.123818, Train accuracy: 0.962410, Val accuracy: 0.921848
Average loss: 0.117014, Train accuracy: 0.963639, Val accuracy: 0.921439
Average loss: 0.110578, Train accuracy: 0.966061, Val accuracy: 0.922053
###Markdown
The final chord - let's check the best model on the test set
For a change, this time you write the code that runs the model on the test set.
As a result you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)  # loader for the test data
final_test_accuracy = compute_accuracy(best_model, test_loader)  # accuracy on the test set
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/
Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
device
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
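# (dset.SVHN defaults to split='train'; the mean/std above are approximate per-channel SVHN statistics)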
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
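# SubsetRandomSampler draws only from its index list (in random order),
# so the training and validation batches never overlap.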
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
# class Flattener(nn.Module):
# def forward(self, x):
# batch_size, *_ = x.shape
# return x.view(batch_size, -1)
# actually this helper is no longer needed - we will use nn.Flatten instead
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Flatten(),
nn.Linear(64*2*2, 10),
)
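# Shape check (for 32x32 inputs): the padding=1 convolutions keep the spatial size,
# and MaxPool2d(4) gives 32 -> 8 -> 2, hence nn.Linear(64*2*2, 10).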
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Восстановите функцию `compute_accuracy` из прошлого задания. Единственное отличие в новом - она должна передать данные на GPU прежде чем прогонять через модель. Сделайте это так же, как это делает функция `train_model`
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()  # take the scalar so the autograd graph isn't kept alive
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so this averages over all batches
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if scheduler is not None:
scheduler.step()
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
    model.eval()  # evaluation mode (fixes batchnorm statistics, disables dropout)
    correct = 0
    total = 0
    with torch.no_grad():  # gradients are not needed during evaluation
        for X, y in loader:
            X_gpu, y_gpu = X.to(device), y.to(device)
            pred = model(X_gpu).argmax(dim=1)
            correct += (y_gpu == pred).sum().item()
            total += y_gpu.shape[0]
    return correct / total
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.381758, Train accuracy: 0.539484, Val accuracy: 0.760631
Average loss: 0.677868, Train accuracy: 0.794134, Val accuracy: 0.807726
Average loss: 0.581774, Train accuracy: 0.826059, Val accuracy: 0.820900
Average loss: 0.536676, Train accuracy: 0.839573, Val accuracy: 0.800150
Average loss: 0.505937, Train accuracy: 0.848497, Val accuracy: 0.845471
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network.
It is important that the augmented data resemble what can occur in real life; otherwise the benefit of augmentation shrinks, and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1249: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Визуализируем результаты агментации (вообще, смотреть на сгенерированные данные всегда очень полезно).
###Code
def visualize_augmentation(data, nrows=4, ncols=7):
plt.figure(figsize=(5 * nrows, ncols * 2))
for i, (x, y) in enumerate(data):
if i == nrows * ncols:
break
plt.subplot(nrows, ncols, i + 1)
plt.title(f"True label: {y}", size=14)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
visualize_augmentation(data_aug_vis)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1249: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
The flip-based augmentations will confuse the model: with a vertical flip it is hard to tell a 6 from a 9, and with a horizontal flip a 2 from a 5. So we remove these augmentations.
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
visualize_augmentation(dset.SVHN('./', transform=tfs))
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
def weight_reset(model):
for layer in model.children():
if hasattr(layer, 'reset_parameters'):
layer.reset_parameters()
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
weight_reset(nn_model)
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.601936, Train accuracy: 0.454988, Val accuracy: 0.693263
Average loss: 0.814979, Train accuracy: 0.745657, Val accuracy: 0.774896
Average loss: 0.696991, Train accuracy: 0.783913, Val accuracy: 0.803358
Average loss: 0.644279, Train accuracy: 0.801778, Val accuracy: 0.819944
Average loss: 0.612077, Train accuracy: 0.814080, Val accuracy: 0.806088
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.
You do **not** need to implement LeNet layers or loss functions that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, kernel_size=5, stride=1, padding=0),
nn.MaxPool2d(2),
nn.Tanh(),
nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=0),
nn.MaxPool2d(2),
nn.Tanh(),
nn.Flatten(),
nn.Linear(16 * 5 * 5, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.355999, Train accuracy: 0.545200, Val accuracy: 0.817146
Average loss: 0.590508, Train accuracy: 0.819404, Val accuracy: 0.843287
Average loss: 0.503640, Train accuracy: 0.846023, Val accuracy: 0.866016
Average loss: 0.460365, Train accuracy: 0.859246, Val accuracy: 0.868405
Average loss: 0.427471, Train accuracy: 0.869587, Val accuracy: 0.874411
Average loss: 0.404798, Train accuracy: 0.875354, Val accuracy: 0.879803
Average loss: 0.387144, Train accuracy: 0.880763, Val accuracy: 0.879735
Average loss: 0.367547, Train accuracy: 0.887179, Val accuracy: 0.887311
Average loss: 0.355881, Train accuracy: 0.890694, Val accuracy: 0.884035
Average loss: 0.344033, Train accuracy: 0.893697, Val accuracy: 0.887789
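###Markdown
Before tuning anything, it can help to sanity-check the model size. A minimal sketch that counts the trainable parameters of the LeNet-like model defined above:
###Code
# Count trainable parameters of lenet_model
n_params = sum(p.numel() for p in lenet_model.parameters() if p.requires_grad)
print("Trainable parameters:", n_params)
###Output
_____no_output_____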
###Markdown
Hyperparameter tuning
###Code
from itertools import product
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
params = np.array(list(product(learning_rates, anneal_epochs, reg)))
n_params = 10
indices = np.random.choice(np.arange(len(params)), size=n_params, replace=False)
for i, (lr, n_epochs_anneal, rg) in enumerate(params[indices]):
print("======" * 3)
print(f"Model {i + 1}/{n_params}")
print(f"lr: {lr}, weight_decay: {rg}, step_size: {n_epochs_anneal}\n")
weight_reset(lenet_model)
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=rg)
scheduler = optim.lr_scheduler.StepLR(
optimizer,
        step_size=int(n_epochs_anneal),  # product() made the grid float-valued; StepLR expects an int
gamma=anneal_coeff
)
loss_history, train_history, val_history = train_model(
lenet_model,
train_aug_loader,
val_loader,
loss,
optimizer,
epoch_num,
scheduler=scheduler
)
run_record[Hyperparams(lr, n_epochs_anneal, rg)] = RunResult(
lenet_model,
train_history,
val_history,
val_history[-1]
)
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.89, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=20.0, reg=0.0001)
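###Markdown
The search also records per-epoch histories, so plotting the best run is a quick sanity check. A minimal sketch, assuming `best_run` holds the RunResult selected in the loop above:
###Code
# Plot the accuracy curves of the best run found by the random search
plt.plot(best_run.train_history, label='train accuracy')
plt.plot(best_run.val_history, label='val accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____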
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations
###Code
# I tested different architectures and depths - this model
# turned out best; the optimizer is Adam
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1),
nn.MaxPool2d(2),
nn.BatchNorm2d(16),
nn.ReLU(inplace=True),
nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),
nn.MaxPool2d(2),
nn.BatchNorm2d(32),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
nn.MaxPool2d(2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
nn.MaxPool2d(2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Flatten(),
nn.Linear(128 * 2 * 2, 128),
nn.BatchNorm1d(128),
nn.ReLU(inplace=True),
nn.Linear(128, 32),
nn.BatchNorm1d(32),
nn.ReLU(inplace=True),
nn.Linear(32, 10)
)
model.type(torch.cuda.FloatTensor)
model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(model.parameters())
n_epochs = 20
loss_history, train_history, val_history = train_model(model, train_aug_loader, val_loader, loss, optimizer, n_epochs)
best_model = model
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set
For variety, this time you write the code that runs the model on the test set yourself.
In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
# Use the held-out test split rather than the validation loader
# (assumes the data_test dataset created when the data was loaded)
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Task 3.2 - convolutional neural networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
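###Markdown
The cell above assumes a CUDA device is present. A hedged sketch of a fallback that keeps this cell runnable on a CPU-only runtime (note that the later `.type(torch.cuda.FloatTensor)` calls still require a GPU):
###Code
# Sketch: pick CUDA when available, otherwise fall back to CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____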
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
train_loader.dataset.data.shape
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
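###Markdown
As an aside, the hand-rolled `Flattener` above is equivalent to PyTorch's built-in `nn.Flatten`, which by default keeps the batch dimension and flattens the rest. A minimal sketch of the correspondence:
###Code
# nn.Flatten() does the same thing as Flattener.forward
flatten = nn.Flatten()
x = torch.zeros(64, 3, 32, 32)
assert flatten(x).shape == Flattener()(x).shape == (64, 3 * 32 * 32)
###Output
_____no_output_____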
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def multiclass_accuracy(prediction, ground_truth):
"""
Computes metrics for multiclass classification
Arguments:
prediction, np array of int (num_samples) - model predictions
ground_truth, np array of int (num_samples) - true labels
Returns:
accuracy - ratio of accurate predictions to total samples
"""
# TODO: Implement computing accuracy
accuracy = len(prediction[prediction == ground_truth]) / len(prediction)
return accuracy
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=False,show=True):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so add 1 to get the batch count
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if scheduler:
scheduler.step()
if show:
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
#def compute_accuracy(X, y):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
accuracies = []
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
pred = torch.argmax(prediction, dim=1)
#print(pred.shape)
#print(y_gpu.shape)
accuracies.append(multiclass_accuracy(pred, y_gpu))
return np.mean(accuracies)
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
torch.Size([64, 3, 32, 32])
(this batch-shape debug line repeats identically for every remaining batch; duplicates omitted)
###Markdown
Data augmentation
When working with images, one of the most important techniques is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.
It is important that the augmented data resemble what can occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.578891, Train accuracy: 0.458349, Val accuracy: 0.725356
Average loss: 0.793040, Train accuracy: 0.751834, Val accuracy: 0.793121
Average loss: 0.676433, Train accuracy: 0.793383, Val accuracy: 0.809218
Average loss: 0.624462, Train accuracy: 0.810702, Val accuracy: 0.810890
Average loss: 0.598728, Train accuracy: 0.819029, Val accuracy: 0.838126
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.
You do **not** need to implement LeNet layers or loss functions that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
# ((n + 2*padding -filter_size)/stride + 1)
# 3x32x32
nn.Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1)),
# 6x28x28
nn.Tanh(),
# 6x28x28
nn.MaxPool2d(2),
# 6x14x14
nn.Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1)),
# 16x10x10
nn.Tanh(),
# 16x10x10
nn.MaxPool2d(2),
# 16x5x5
nn.Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1)),
# 120x1x1
nn.Tanh(),
# 120x1x1
Flattener(),
# 120
nn.Linear(in_features=120, out_features=84, bias=True),
# 84
nn.Tanh(),
# 84
nn.Linear(in_features=84, out_features=10, bias=True),
# 10
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
#loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
anneal_epochs = [1, 5]  # override: narrowed grid to keep runtime manageable
regs = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
batch_size = 16  # overrides the values above, again to save time
epoch_num = 5
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = 0
for learning_rate in learning_rates:
for anneal_epoch in anneal_epochs:
for reg in regs:
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1)),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1)),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1)),
nn.Tanh(),
Flattener(),
nn.Linear(in_features=120, out_features=84, bias=True),
nn.Tanh(),
nn.Linear(in_features=84, out_features=10, bias=True),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
optimizer = optim.SGD(lenet_model.parameters(), lr=learning_rate, weight_decay=reg)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num,scheduler=scheduler,show=False)
print(Hyperparams(learning_rate,anneal_epoch,reg),val_history[-1])
run_record[Hyperparams(learning_rate,anneal_epoch,reg)] = RunResult(lenet_model,train_history,val_history,val_history[-1])
best_val_accuracy = None
best_hyperparams = {'learning_rates': 1e-1, 'anneal_coeff': 0.2, 'anneal_epochs': 5, 'reg': 1e-3, 'batch_size': 64, 'epoch_num': 10}
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
print(run_result)
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
RunResult(model=Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(1): Tanh()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): Tanh()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=120, out_features=84, bias=True)
(10): Tanh()
(11): Linear(in_features=84, out_features=10, bias=True)
), train_history=[0.12510664437088354, 0.2171450022182029, 0.3756953212981606, 0.42198750981128214, 0.4312186465549602], val_history=[0.13930662978313965, 0.3080152283324698, 0.41781233809488566, 0.4389467378432389, 0.44039116275627266], final_val_accuracy=0.44039116275627266)
RunResult(model=Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(1): Tanh()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): Tanh()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=120, out_features=84, bias=True)
(10): Tanh()
(11): Linear(in_features=84, out_features=10, bias=True)
), train_history=[0.22335597037845956, 0.5295703511585844, 0.6033000034126199, 0.619509947786916, 0.6218646554960243], val_history=[0.4050923784323884, 0.6076784656946191, 0.6346021297461328, 0.6402422100510695, 0.6411743209236919], final_val_accuracy=0.6411743209236919)
RunResult(model=Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(1): Tanh()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): Tanh()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=120, out_features=84, bias=True)
(10): Tanh()
(11): Linear(in_features=84, out_features=10, bias=True)
), train_history=[0.5848377299252636, 0.7294133706446438, 0.739770671944852, 0.7432856704091731, 0.7480121489267311], val_history=[0.7263618533047148, 0.7505562597143068, 0.767721671230849, 0.7153708089704685, 0.7671989490045148], final_val_accuracy=0.7671989490045148)
RunResult(model=Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(1): Tanh()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): Tanh()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=120, out_features=84, bias=True)
(10): Tanh()
(11): Linear(in_features=84, out_features=10, bias=True)
), train_history=[0.5893253250520425, 0.738405623997543, 0.7469712998669078, 0.7581988192335256, 0.7557587960277105], val_history=[0.760079749833469, 0.767733235881874, 0.7794065946266007, 0.7765350917770705, 0.7929615220931092], final_val_accuracy=0.7929615220931092)
RunResult(model=Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(1): Tanh()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): Tanh()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=120, out_features=84, bias=True)
(10): Tanh()
(11): Linear(in_features=84, out_features=10, bias=True)
), train_history=[0.5597720369927994, 0.8343684946933762, 0.8479507217691021, 0.8510050165512063, 0.8516534143261782], val_history=[0.8080580175412626, 0.8612450040707572, 0.8649468488638886, 0.8649121549108134, 0.8654869180667605], final_val_accuracy=0.8654869180667605)
RunResult(model=Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(1): Tanh()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): Tanh()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=120, out_features=84, bias=True)
(10): Tanh()
(11): Linear(in_features=84, out_features=10, bias=True)
), train_history=[0.5701634644916903, 0.839231478005665, 0.8505272497696481, 0.8543664471214552, 0.8538033648431901], val_history=[0.8125381633483829, 0.8579016634594036, 0.8681305972910961, 0.8694385593220338, 0.8691656335578417], final_val_accuracy=0.8691656335578417)
RunResult(model=Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(1): Tanh()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): Tanh()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=120, out_features=84, bias=True)
(10): Tanh()
(11): Linear(in_features=84, out_features=10, bias=True)
), train_history=[0.5954509777155923, 0.8428488550660342, 0.8544005733201379, 0.8566017131351739, 0.8563969559430775], val_history=[0.8284198986011397, 0.8651226315594701, 0.8697508048997112, 0.8709789708385759, 0.8710136647916512], final_val_accuracy=0.8710136647916512)
RunResult(model=Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(1): Tanh()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): Tanh()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=120, out_features=84, bias=True)
(10): Tanh()
(11): Linear(in_features=84, out_features=10, bias=True)
), train_history=[0.5938641094768454, 0.8291471862949186, 0.8497423471999453, 0.8608333617718322, 0.8695184793365867], val_history=[0.8307108559692102, 0.8489008955665753, 0.8751075512545333, 0.8808956590925913, 0.8859898878691436], final_val_accuracy=0.8859898878691436)
Best validation accuracy: 0.89, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=5, reg=0.001)
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations
###Code
# The free exercise was left unimplemented here; reuse the model from the
# best hyperparameter-search run so the test cell below has something to evaluate.
best_model = best_run.model
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set
For variety, this time you write the code that runs the model on the test set yourself.
In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
# Evaluate on the held-out test split created when the data was loaded
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Task 3.2 - convolutional neural networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook will install PyTorch itself)
###Code
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
device
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN("data",
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN("data", split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler = None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so add 1 to get the batch count
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if scheduler is not None:
scheduler.step()
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
accuracy = 0
correct_samples = 0
total_samples = 0
    for i_step, (x, y) in enumerate(loader):  # fixed: iterate the loader argument, not the global train_loader
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
accuracy = float(correct_samples) / total_samples
return accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.406965, Train accuracy: 0.527079, Val accuracy: 0.776968
Average loss: 0.705750, Train accuracy: 0.784920, Val accuracy: 0.824643
Average loss: 0.605360, Train accuracy: 0.818619, Val accuracy: 0.833908
Average loss: 0.556412, Train accuracy: 0.833737, Val accuracy: 0.831587
Average loss: 0.526128, Train accuracy: 0.841330, Val accuracy: 0.842832
###Markdown
Data augmentation
When working with images, one of the most important techniques is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.
It is important that the augmented data resemble what can occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('data',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('data',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_vis = dset.SVHN('data',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_vis, batch_size=batch_size, sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.606438, Train accuracy: 0.815360, Val accuracy: 0.834351
Average loss: 0.566008, Train accuracy: 0.827748, Val accuracy: 0.855407
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.
You do **not** need to implement LeNet layers or loss functions that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.376088, Train accuracy: 0.532864, Val accuracy: 0.836433
Average loss: 0.568289, Train accuracy: 0.831860, Val accuracy: 0.874791
Average loss: 0.473084, Train accuracy: 0.859298, Val accuracy: 0.892417
Average loss: 0.421249, Train accuracy: 0.875474, Val accuracy: 0.886650
Average loss: 0.390424, Train accuracy: 0.884961, Val accuracy: 0.911989
Average loss: 0.359345, Train accuracy: 0.893543, Val accuracy: 0.907211
Average loss: 0.346232, Train accuracy: 0.896768, Val accuracy: 0.920571
Average loss: 0.325535, Train accuracy: 0.901205, Val accuracy: 0.924052
Average loss: 0.312845, Train accuracy: 0.906358, Val accuracy: 0.917022
Average loss: 0.299984, Train accuracy: 0.909873, Val accuracy: 0.931458
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
for i in range(10):
lr = learning_rates[np.random.randint(len(learning_rates))]
ae = anneal_epochs[np.random.randint(len(anneal_epochs))]
rg = reg[np.random.randint(len(reg))]
    print(f'Parameters are lr={lr}, anneal_epochs={ae}, reg={rg}')  # fixed: the sampled value is named rg, not r
    params = Hyperparams(lr, ae, rg)
new_lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
new_lenet_model.type(torch.cuda.FloatTensor)
new_lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
    optimizer = optim.Adam(new_lenet_model.parameters(), lr=lr, weight_decay=rg)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=ae,
gamma=anneal_coeff)
loss_history, train_history, val_history = \
        train_model(new_lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num,
scheduler=scheduler)
results = RunResult(new_lenet_model, train_history, val_history, val_history[-1])
run_record[params] = results
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.93, best hyperparams: Hyperparams(learning_rate=1.0, anneal_epochs=20, reg=1e-07)
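###Markdown
The random search above picks values from hand-chosen discrete grids, which already spaces them logarithmically. For reference, a minimal sketch (ours, with a hypothetical `sample_log_uniform` helper) of fully continuous random search in log space:
###Code
# Sketch: sample 10**u with u uniform, so each decade is equally likely.
import numpy as np

def sample_log_uniform(low_exp, high_exp):
    return 10.0 ** np.random.uniform(low_exp, high_exp)

lr = sample_log_uniform(-4, 0)    # learning rate between 1e-4 and 1e0
rg = sample_log_uniform(-7, -3)   # weight decay between 1e-7 and 1e-3
print(lr, rg)
###Output
_____no_output_____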
###Markdown
A free-form exercise - let's catch up with LeNet and overtake it! Try to find an architecture and training settings that beat our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch) - changing the number of layers and their width - changing the number of training epochs - trying other augmentations as well
###Code
epoch_num = 15
best_model = nn.Sequential(
nn.Conv2d(3, 32, 6, stride = 2),
nn.BatchNorm2d(32),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, 3, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 128, 4, stride = 2, padding=2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 128, 4, stride = 2, padding=2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 128, 5, stride = 2, padding=2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(128 * 3 * 3, 64),
nn.BatchNorm1d(64),
nn.ReLU(inplace=True),
nn.Linear(64, 10)
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(best_model.parameters(), lr=1e-2, weight_decay=1e-7)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
loss_history, train_history, val_history = \
train_model(best_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler=scheduler)
###Output
Average loss: 0.827293, Train accuracy: 0.726444, Val accuracy: 0.896734
Average loss: 0.389098, Train accuracy: 0.880575, Val accuracy: 0.920247
Average loss: 0.314524, Train accuracy: 0.904566, Val accuracy: 0.932259
Average loss: 0.279012, Train accuracy: 0.916015, Val accuracy: 0.949152
Average loss: 0.247170, Train accuracy: 0.925212, Val accuracy: 0.956404
Average loss: 0.220282, Train accuracy: 0.935007, Val accuracy: 0.956591
Average loss: 0.202107, Train accuracy: 0.938658, Val accuracy: 0.958537
Average loss: 0.179331, Train accuracy: 0.944954, Val accuracy: 0.967734
Average loss: 0.164785, Train accuracy: 0.949288, Val accuracy: 0.971846
Average loss: 0.149617, Train accuracy: 0.953810, Val accuracy: 0.978756
Average loss: 0.135762, Train accuracy: 0.957240, Val accuracy: 0.979200
Average loss: 0.122879, Train accuracy: 0.961318, Val accuracy: 0.980787
Average loss: 0.111789, Train accuracy: 0.964185, Val accuracy: 0.984268
Average loss: 0.105957, Train accuracy: 0.966164, Val accuracy: 0.984387
Average loss: 0.095423, Train accuracy: 0.969952, Val accuracy: 0.985172
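###Markdown
A small utility sketch (ours, not part of the assignment): when comparing architectures of different depth and width, counting trainable parameters gives a quick sense of model size.
###Code
# Sketch: count the trainable parameters of the model defined above.
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(count_parameters(best_model))
###Output
_____no_output_____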
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
Final test accuracy - 0.9711804252124356
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will be doing this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
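###Markdown
A side note (a sketch of ours, not part of the assignment): per-channel statistics like the mean=[0.43,0.44,0.47] and std=[0.20,0.20,0.20] used in Normalize above can be estimated directly from the raw pixels.
###Code
# Sketch: torchvision's SVHN stores data_train.data as a (N, 3, 32, 32) uint8 array.
import numpy as np

pixels = data_train.data.astype(np.float32) / 255.0
print(pixels.mean(axis=(0, 2, 3)))  # roughly the per-channel means above
print(pixels.std(axis=(0, 2, 3)))   # roughly the per-channel stds above
###Output
_____no_output_____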
###Markdown
Splitting the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
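###Markdown
A minimal illustration (ours, not part of the assignment) of what `Flattener` does: it keeps the batch dimension and collapses all remaining dimensions into one, which is exactly what a fully connected layer expects after the convolutions.
###Code
# Sketch: a (batch, channels, height, width) tensor becomes (batch, channels*height*width).
import torch

x = torch.zeros(8, 16, 5, 5)
print(Flattener()(x).shape)  # torch.Size([8, 400])
###Output
_____no_output_____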
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
if scheduler is not None:
scheduler.step()
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
val_accuracy = float(correct_samples) / total_samples
return val_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.410031, Train accuracy: 0.527591, Val accuracy: 0.741178
Average loss: 0.688733, Train accuracy: 0.790277, Val accuracy: 0.799331
Average loss: 0.583746, Train accuracy: 0.827492, Val accuracy: 0.828749
Average loss: 0.535295, Train accuracy: 0.842064, Val accuracy: 0.831684
Average loss: 0.506325, Train accuracy: 0.850203, Val accuracy: 0.822538
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. It effectively "enlarges" the training set, which leads to a better-performing network. It is important that the augmented data resemble data that could occur in real life; otherwise the benefit of the augmentations shrinks, and they may even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with probability 50% - RandomVerticalFlip - vertical flip with probability 50% - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, it is always very useful to look at the generated data).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Keep only the appropriate ones. **A redundant augmentation:** unless the goal is to distinguish digits in mirror reflections, flip augmentations will only confuse the network (e.g. 6 and 9).
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.601480, Train accuracy: 0.817578, Val accuracy: 0.836189
Average loss: 0.553268, Train accuracy: 0.833003, Val accuracy: 0.822742
Average loss: 0.531902, Train accuracy: 0.839914, Val accuracy: 0.835301
Average loss: 0.516019, Train accuracy: 0.845920, Val accuracy: 0.856597
Average loss: 0.499294, Train accuracy: 0.850049, Val accuracy: 0.850113
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know. If the paper is hard to follow, you can simply google LeNet and figure out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
nn.Conv2d(6, 16, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
Flattener(),
nn.Linear(16 * 5 * 5, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.265287, Train accuracy: 0.572655, Val accuracy: 0.812504
Average loss: 0.570986, Train accuracy: 0.827902, Val accuracy: 0.850454
Average loss: 0.486061, Train accuracy: 0.852080, Val accuracy: 0.865948
Average loss: 0.443744, Train accuracy: 0.864195, Val accuracy: 0.870999
Average loss: 0.414783, Train accuracy: 0.873716, Val accuracy: 0.875913
Average loss: 0.393272, Train accuracy: 0.878715, Val accuracy: 0.879462
Average loss: 0.373772, Train accuracy: 0.885694, Val accuracy: 0.885332
Average loss: 0.360902, Train accuracy: 0.889141, Val accuracy: 0.875026
Average loss: 0.350492, Train accuracy: 0.892451, Val accuracy: 0.880964
Average loss: 0.335930, Train accuracy: 0.897058, Val accuracy: 0.890110
###Markdown
Hyperparameter tuning. Let's reduce the number of parameter values and epochs, and add an `lr_scheduler` argument to the `train_model` function:
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2]
anneal_coeff = 0.2
anneal_epochs = [2, 4]
regs = [1e-4, 1e-5]
batch_size = 64
epoch_num = 6
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
for lr in learning_rates:
for anneal_epoch in anneal_epochs:
for reg in regs:
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
nn.Conv2d(6, 16, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
Flattener(),
nn.Linear(16 * 5 * 5, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
params = Hyperparams(lr, anneal_epoch, reg)
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
result = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[params] = result
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.89, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=4, reg=0.0001)
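###Markdown
For reference, a minimal sketch (ours, not part of the assignment) of what the `StepLR` annealing used above does: every `step_size` epochs the learning rate is multiplied by `gamma`.
###Code
# Sketch: watch the learning rate decay of StepLR on a throwaway optimizer.
import torch
import torch.nn as nn
import torch.optim as optim

p = nn.Parameter(torch.zeros(1))
opt = optim.SGD([p], lr=0.1)
sched = optim.lr_scheduler.StepLR(opt, step_size=4, gamma=0.2)
for epoch in range(8):
    opt.step()
    sched.step()
    print(epoch, opt.param_groups[0]['lr'])
###Output
_____no_output_____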
###Markdown
A free-form exercise - let's catch up with LeNet and overtake it! Try to find an architecture and training settings that beat our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch) - changing the number of layers and their width - changing the number of training epochs - trying other augmentations as well We apply the first three tips and also switch the scheduler to a cyclic one.
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 256, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(256 * 2 * 2, 120),
nn.BatchNorm1d(120),
nn.ReLU(inplace=True),
nn.Linear(120, 10)
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
optimizer = optim.Adam(best_model.parameters(), lr=1e-3, weight_decay=1e-5)
scheduler = optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=1e-3, cycle_momentum=False)
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 20, scheduler)
###Output
Average loss: 1.040683, Train accuracy: 0.708528, Val accuracy: 0.855027
Average loss: 0.495890, Train accuracy: 0.859980, Val accuracy: 0.887380
Average loss: 0.385912, Train accuracy: 0.888476, Val accuracy: 0.900621
Average loss: 0.326620, Train accuracy: 0.904293, Val accuracy: 0.905740
Average loss: 0.288272, Train accuracy: 0.915265, Val accuracy: 0.910450
Average loss: 0.256955, Train accuracy: 0.924155, Val accuracy: 0.914067
Average loss: 0.228394, Train accuracy: 0.933061, Val accuracy: 0.917753
Average loss: 0.209964, Train accuracy: 0.936867, Val accuracy: 0.917958
Average loss: 0.190176, Train accuracy: 0.942975, Val accuracy: 0.917617
Average loss: 0.171519, Train accuracy: 0.949152, Val accuracy: 0.919869
Average loss: 0.157150, Train accuracy: 0.952343, Val accuracy: 0.919596
Average loss: 0.146907, Train accuracy: 0.955858, Val accuracy: 0.914955
Average loss: 0.133668, Train accuracy: 0.958741, Val accuracy: 0.919391
Average loss: 0.125490, Train accuracy: 0.962273, Val accuracy: 0.917002
Average loss: 0.114747, Train accuracy: 0.965464, Val accuracy: 0.915705
Average loss: 0.108141, Train accuracy: 0.966693, Val accuracy: 0.920620
Average loss: 0.098150, Train accuracy: 0.970071, Val accuracy: 0.917821
Average loss: 0.091485, Train accuracy: 0.971829, Val accuracy: 0.916934
Average loss: 0.085928, Train accuracy: 0.973501, Val accuracy: 0.916661
Average loss: 0.082750, Train accuracy: 0.973620, Val accuracy: 0.918231
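###Markdown
A minimal sketch (ours, not part of the assignment) of how `CyclicLR` moves the learning rate back and forth between `base_lr` and `max_lr`, which is the behaviour relied on above:
###Code
# Sketch: print the triangular learning-rate schedule produced by CyclicLR.
import torch
import torch.nn as nn
import torch.optim as optim

p = nn.Parameter(torch.zeros(1))
opt = optim.SGD([p], lr=1e-3)
sched = optim.lr_scheduler.CyclicLR(opt, base_lr=1e-4, max_lr=1e-3,
                                    step_size_up=5, cycle_momentum=False)
for step in range(20):
    opt.step()
    sched.step()
    print(step, round(opt.param_groups[0]['lr'], 5))
###Output
_____no_output_____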
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will be doing this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./data/',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./data/', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
    for x, y in loader:  # iterate over the loader argument, not the global val_loader
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
val_accuracy = float(correct_samples) / total_samples
return val_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.696906, Train accuracy: 0.789612, Val accuracy: 0.823971
Average loss: 0.588195, Train accuracy: 0.823687, Val accuracy: 0.814552
Average loss: 0.543352, Train accuracy: 0.838993, Val accuracy: 0.854140
Average loss: 0.510267, Train accuracy: 0.846500, Val accuracy: 0.842332
Average loss: 0.487705, Train accuracy: 0.854759, Val accuracy: 0.852638
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. It effectively "enlarges" the training set, which leads to a better-performing network. It is important that the augmented data resemble data that could occur in real life; otherwise the benefit of the augmentations shrinks, and they may even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with probability 50% - RandomVerticalFlip - vertical flip with probability 50% - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./data/',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, it is always very useful to look at the generated data).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./data/',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Keep only the appropriate ones.
###Code
# TODO:
tfs_ = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.50, saturation=.50),
#transforms.RandomHorizontalFlip(),
#transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./data/', transform=tfs_)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_train):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
    # x is a normalized CHW tensor here: undo Normalize and reorder to HWC before imshow
    plt.imshow((x.permute(1, 2, 0) * torch.tensor([0.20, 0.20, 0.20])
                + torch.tensor([0.43, 0.44, 0.47])).clamp(0, 1))
plt.axis('off')
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know. If the paper is hard to follow, you can simply google LeNet and figure out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
    # LeNet-style layer sizes from the paper, mapped onto PyTorch layers
    nn.Conv2d(3, 6, 5), nn.Tanh(), nn.MaxPool2d(2),
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.MaxPool2d(2),
    Flattener(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
A free-form exercise - let's catch up with LeNet and overtake it! Try to find an architecture and training settings that beat our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch) - changing the number of layers and their width - changing the number of training epochs - trying other augmentations as well
###Code
best_model = None
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will be doing this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!mkdir ../data
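# NB: each "!" command runs in its own shell, so the cd below does not persist - wget downloads into the current working directory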
!cd ../data
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. It effectively "enlarges" the training set, which leads to a better-performing network. It is important that the augmented data resemble data that could occur in real life; otherwise the benefit of the augmentations shrinks, and they may even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with probability 50% - RandomVerticalFlip - vertical flip with probability 50% - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, it is always very useful to look at the generated data).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Keep only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = None
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know. If the paper is hard to follow, you can simply google LeNet and figure out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
A free-form exercise - let's catch up with LeNet and overtake it! Try to find an architecture and training settings that beat our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch) - changing the number of layers and their width - changing the number of training epochs - trying other augmentations as well
###Code
best_model = None
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will be doing this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
#!pip3 install torch torchvision
#!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('../assignment1/data/',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('../assignment1/data/', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
    # Mean of per-batch accuracies; slightly biased if the last batch is smaller than the rest
    acc = [torch.mean((model(batch[0].to(device)).argmax(axis=1) == batch[1].to(device)).float())
           for batch in loader]
    return float(torch.mean(torch.Tensor(acc)))
%timeit loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.396531, Train accuracy: 0.536805, Val accuracy: 0.764475
Average loss: 0.699114, Train accuracy: 0.788008, Val accuracy: 0.797897
Average loss: 0.597210, Train accuracy: 0.820940, Val accuracy: 0.798597
Average loss: 0.552182, Train accuracy: 0.836126, Val accuracy: 0.804402
Average loss: 0.518961, Train accuracy: 0.844982, Val accuracy: 0.825139
Average loss: 0.493323, Train accuracy: 0.852814, Val accuracy: 0.825048
Average loss: 0.473506, Train accuracy: 0.858837, Val accuracy: 0.846792
Average loss: 0.457436, Train accuracy: 0.864212, Val accuracy: 0.838984
Average loss: 0.441164, Train accuracy: 0.868853, Val accuracy: 0.848002
Average loss: 0.427621, Train accuracy: 0.872249, Val accuracy: 0.850737
Average loss: 0.416806, Train accuracy: 0.874160, Val accuracy: 0.860824
Average loss: 0.411368, Train accuracy: 0.876122, Val accuracy: 0.851562
Average loss: 0.402633, Train accuracy: 0.879808, Val accuracy: 0.851687
Average loss: 0.392752, Train accuracy: 0.882845, Val accuracy: 0.855496
Average loss: 0.385110, Train accuracy: 0.885319, Val accuracy: 0.848002
Average loss: 0.378955, Train accuracy: 0.885524, Val accuracy: 0.863627
Average loss: 0.372882, Train accuracy: 0.888271, Val accuracy: 0.862945
Average loss: 0.365834, Train accuracy: 0.889005, Val accuracy: 0.858288
Average loss: 0.360425, Train accuracy: 0.892247, Val accuracy: 0.863144
Average loss: 0.352128, Train accuracy: 0.893765, Val accuracy: 0.861728
Average loss: 0.347335, Train accuracy: 0.894874, Val accuracy: 0.866908
Average loss: 0.347843, Train accuracy: 0.894362, Val accuracy: 0.866692
Average loss: 0.339263, Train accuracy: 0.897400, Val accuracy: 0.873652
Average loss: 0.335317, Train accuracy: 0.898952, Val accuracy: 0.853069
Average loss: 0.334705, Train accuracy: 0.897775, Val accuracy: 0.849014
Average loss: 0.330349, Train accuracy: 0.900317, Val accuracy: 0.844363
Average loss: 0.329317, Train accuracy: 0.899993, Val accuracy: 0.861575
Average loss: 0.323023, Train accuracy: 0.901870, Val accuracy: 0.865009
Average loss: 0.317352, Train accuracy: 0.904140, Val accuracy: 0.873640
Average loss: 0.314448, Train accuracy: 0.904225, Val accuracy: 0.863951
Average loss: 0.313239, Train accuracy: 0.904566, Val accuracy: 0.869848
Average loss: 0.309010, Train accuracy: 0.904805, Val accuracy: 0.827738
Average loss: 0.308009, Train accuracy: 0.905283, Val accuracy: 0.853250
Average loss: 0.303620, Train accuracy: 0.908559, Val accuracy: 0.862337
Average loss: 0.301909, Train accuracy: 0.907347, Val accuracy: 0.870610
Average loss: 0.301586, Train accuracy: 0.908832, Val accuracy: 0.858555
Average loss: 0.295276, Train accuracy: 0.908712, Val accuracy: 0.852090
Average loss: 0.297306, Train accuracy: 0.908764, Val accuracy: 0.862206
Average loss: 0.291400, Train accuracy: 0.908644, Val accuracy: 0.872384
Average loss: 0.291790, Train accuracy: 0.910709, Val accuracy: 0.867442
1min 26s ± 1.83 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. It effectively "enlarges" the training set, which leads to a better-performing network. It is important that the augmented data resemble data that could occur in real life; otherwise the benefit of the augmentations shrinks, and they may even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with probability 50% - RandomVerticalFlip - vertical flip with probability 50% - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
#transforms.RandomHorizontalFlip(),
#transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('../assignment1/data',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, it is always very useful to look at the generated data).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
#transforms.RandomHorizontalFlip(),
#transforms.RandomVerticalFlip(),
transforms.RandomSizedCrop(29),
transforms.RandomCrop(29),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('../assignment1/data',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
C:\Users\Omega\Anaconda3\lib\site-packages\torchvision\transforms\transforms.py:703: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
warnings.warn("The use of the transforms.RandomSizedCrop transform is deprecated, " +
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Keep only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
#transforms.RandomHorizontalFlip(),
#transforms.RandomVerticalFlip(),
transforms.RandomCrop(29),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Recreate the dataset with the transforms chosen above; otherwise the old tfs would still be applied
data_aug_train = dset.SVHN('../assignment1/data', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size, sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.394894, Train accuracy: 0.550319, Val accuracy: 0.749663
Average loss: 0.969426, Train accuracy: 0.684930, Val accuracy: 0.764105
Average loss: 0.888430, Train accuracy: 0.713306, Val accuracy: 0.763429
Average loss: 0.859065, Train accuracy: 0.724909, Val accuracy: 0.763799
Average loss: 0.820393, Train accuracy: 0.734549, Val accuracy: 0.775756
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know. If the paper is hard to follow, you can simply google LeNet and figure out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(400, 120),
nn.Sigmoid(),
nn.Linear(120, 84),
nn.Sigmoid(),
nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 2.248402, Train accuracy: 0.185203, Val accuracy: 0.146380
Average loss: 2.246128, Train accuracy: 0.186329, Val accuracy: 0.146408
Average loss: 2.244367, Train accuracy: 0.187404, Val accuracy: 0.191731
Average loss: 2.243313, Train accuracy: 0.188223, Val accuracy: 0.191714
Average loss: 2.239908, Train accuracy: 0.188155, Val accuracy: 0.191749
Average loss: 2.084726, Train accuracy: 0.250077, Val accuracy: 0.432050
Average loss: 1.490952, Train accuracy: 0.496912, Val accuracy: 0.634772
Average loss: 1.074408, Train accuracy: 0.657544, Val accuracy: 0.733310
Average loss: 0.872123, Train accuracy: 0.730727, Val accuracy: 0.767870
Average loss: 0.765614, Train accuracy: 0.763966, Val accuracy: 0.779577
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
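# One illustrative way to sample in log space (an assumption, not a required API):
#   lr = 10 ** np.random.uniform(-4, 0)   # log-uniform over [1e-4, 1e0]
#   r  = 10 ** np.random.uniform(-7, -3)  # log-uniform over [1e-7, 1e-3]
# The nested loops below enumerate fixed log-spaced values instead.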
for lr in learning_rates:
for ae in anneal_epochs:
for r in reg:
print(f'Parameters are lr={lr}, anneal_epochs={ae}, reg={r}')
params = Hyperparams(lr, ae, r)
new_lenet_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
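                # note: there are no non-linearities between the three Linear layers below,
                # so mathematically they collapse into a single linear map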
nn.Linear(64*2*2, 120),
nn.Linear(120, 84),
nn.Linear(84, 10),
)
new_lenet_model.type(torch.cuda.FloatTensor)
new_lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(new_lenet_model.parameters(), lr=lr, weight_decay=r)
loss_history, train_history, val_history = train_model(new_lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num)
results = RunResult(new_lenet_model, train_history, val_history, val_history[-1])
run_record[params] = results
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. What you can and should try:- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)- Changing the number of layers and their width- Varying the number of training epochs- Trying other augmentations
###Code
best_model = None
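# A minimal BatchNorm-based candidate (an illustrative sketch, not a tuned solution;
# `candidate_model` is a hypothetical name) - train it and assign the winner to best_model:
candidate_model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),          # 32x32 -> 16x16
    nn.Conv2d(32, 64, 3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),          # 16x16 -> 8x8
    Flattener(),
    nn.Linear(64*8*8, 10),
)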
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set yourself. In the end you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
# (a minimal sketch: assumes data_test and compute_accuracy are defined earlier
# in this notebook and that a trained model has been assigned to best_model)
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
device
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Details, just in case - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
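# e.g. Flattener turns a (64, 64, 2, 2) conv output into (64, 256) for nn.Linear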
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def compute_val_loss(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Implement the inference of the model on all of the batches from loader,
# and compute the overall accuracy.
# Hint: PyTorch has the argmax function!
loss_accum = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
preds = model(x_gpu)
loss_accum += float(loss(preds, y_gpu))
return loss_accum
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
if epoch % 5 == 0:
print(f'epoch number {epoch}')
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so add 1 for the batch count
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
val_loss = compute_val_loss(model, val_loader)
        if scheduler is not None:
            # ReduceLROnPlateau needs the monitored metric; other schedulers step unconditionally
            if isinstance(scheduler, optim.lr_scheduler.ReduceLROnPlateau):
                scheduler.step(val_loss)
            else:
                scheduler.step()
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval()
# Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
preds = model(x_gpu)
_, indices = torch.max(preds, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
epoch number 0
Average loss: 1.424487, Train accuracy: 0.522404, Val accuracy: 0.735376
Average loss: 0.687619, Train accuracy: 0.791762, Val accuracy: 0.799331
Average loss: 0.583968, Train accuracy: 0.826195, Val accuracy: 0.817828
Average loss: 0.537634, Train accuracy: 0.840204, Val accuracy: 0.816258
Average loss: 0.513988, Train accuracy: 0.847371, Val accuracy: 0.841513
###Markdown
Data augmentation. When working with images, one of the especially important techniques is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data look like data that could occur in real life; otherwise the benefit of augmentation shrinks and can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms:- ColorJitter - random color change- RandomHorizontalFlip - horizontal flip with 50% probability- RandomVerticalFlip - vertical flip with 50% probability- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1201: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, inspecting the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1201: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
epoch number 0
Average loss: 1.001039, Train accuracy: 0.671433, Val accuracy: 0.747799
Average loss: 0.843871, Train accuracy: 0.726769, Val accuracy: 0.777899
Average loss: 0.798323, Train accuracy: 0.743542, Val accuracy: 0.786363
Average loss: 0.767177, Train accuracy: 0.753455, Val accuracy: 0.772985
Average loss: 0.743733, Train accuracy: 0.762140, Val accuracy: 0.772302
###Markdown
LeNet. Let's implement the classic convolutional network architecture proposed by Yann LeCun in 1998. Back in the day it achieved impressive results on MNIST - let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know. If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, kernel_size=5, out_channels=6, stride=1),
nn.Tanh(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=6, kernel_size=5, out_channels=16),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5),
nn.Tanh(),
nn.Flatten(),
nn.Linear(in_features=120, out_features=10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
epoch number 0
Average loss: 1.332580, Train accuracy: 0.549534, Val accuracy: 0.752099
Average loss: 0.775760, Train accuracy: 0.753472, Val accuracy: 0.784929
Average loss: 0.676480, Train accuracy: 0.785551, Val accuracy: 0.801106
Average loss: 0.616508, Train accuracy: 0.807341, Val accuracy: 0.824312
Average loss: 0.581721, Train accuracy: 0.820496, Val accuracy: 0.828749
epoch number 5
Average loss: 0.554805, Train accuracy: 0.826127, Val accuracy: 0.810388
Average loss: 0.536220, Train accuracy: 0.831775, Val accuracy: 0.836530
Average loss: 0.518306, Train accuracy: 0.838481, Val accuracy: 0.837349
Average loss: 0.507587, Train accuracy: 0.842422, Val accuracy: 0.838100
Average loss: 0.495398, Train accuracy: 0.847336, Val accuracy: 0.838919
###Markdown
Hyperparameter tuning
###Code
import itertools
import random
from scipy.stats import uniform, norm, expon, randint
def create_params(grid, n_iter):
params = []
for i in range(n_iter):
param = []
for key, value in grid.items():
if isinstance(value, list):
param.append(random.choice(value))
else:
param.append(value.rvs())
params.append(param)
return params
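# Example (illustrative): create_params({'lr': uniform(1e-6, 1e-3), 'opt': ['sgd', 'adam']}, n_iter=2)
# returns two sampled [lr, opt] lists; distribution values are drawn via .rvs(),
# plain lists via random.choice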
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
grid_of_params = {
'learning_rates': uniform(1e-6, 1e-3),
# 'anneal_coeff': uniform(0, 0.3),
'anneal_epochs': randint(2, 7),
'reg': uniform(1e-6, 1e-2),
}
from tqdm import tqdm
for lr, anneal_ap, r_str in tqdm(create_params(grid_of_params, n_iter=10)):
param = Hyperparams(lr, anneal_ap, r_str)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(lenet_model.parameters(), lr=lr, weight_decay=r_str)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=anneal_ap)
    # note: the same lenet_model instance is trained further on every run (it is not reinitialized)
    loss_history, train_history, val_history = train_model(lenet_model,
                                                           train_aug_loader,
                                                           val_loader,
                                                           loss,
                                                           optimizer,
                                                           10,
                                                           scheduler=scheduler)
    run_record[param] = RunResult(lenet_model, train_history, val_history, val_history[-1])
final_accuracies = []
for key, value in run_record.items():
    final_accuracies.append(value.final_val_accuracy)
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_accuracy in zip(run_record.keys(), final_accuracies):
if best_val_accuracy is None or best_val_accuracy < run_accuracy:
best_val_accuracy = run_accuracy
best_hyperparams = hyperparams
        best_run = run_record[hyperparams]
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.87, best hyperparams: Hyperparams(learning_rate=0.0002865932336092837, anneal_epochs=5, reg=0.0020876934704385465)
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. What you can and should try:- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)- Changing the number of layers and their width- Varying the number of training epochs- Trying other augmentations
###Code
layers_param_grid = {
'conv1_out': randint(6, 13),
'conv2_out': randint(16, 33),
'n_epoch': randint(10, 26),
'non_linear': [nn.Tanh, nn.Sigmoid, nn.ReLU]
}
Hyperparams2 = namedtuple("Hyperparams2", ['conv1_out', 'conv2_out', 'n_epoch', 'non_linear'])
run_result2 = {}
for conv1_out, conv2_out, n_epoch, non_linear in tqdm(create_params(layers_param_grid, n_iter=10)):
param = Hyperparams2(conv1_out, conv2_out, n_epoch, non_linear)
print(param)
lr, anneal_ap, r_str = best_hyperparams
best_model = nn.Sequential(
nn.Conv2d(in_channels=3, kernel_size=5, out_channels=conv1_out, stride=1),
nn.BatchNorm2d(conv1_out),
non_linear(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=conv1_out, kernel_size=5, out_channels=conv2_out),
nn.BatchNorm2d(conv2_out),
non_linear(),
nn.MaxPool2d(2),
nn.Conv2d(in_channels=conv2_out, out_channels=120, kernel_size=5),
nn.BatchNorm2d(120),
non_linear(),
nn.Flatten(),
nn.Linear(in_features=120, out_features=10),
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(best_model.parameters(), lr=lr, weight_decay=r_str)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=anneal_ap)
loss_history, train_history, val_history = train_model(best_model,
train_aug_loader,
val_loader,
loss,
optimizer,
n_epoch,
                                                           scheduler=scheduler)
result = RunResult(best_model, train_history, val_history, val_history[-1])
run_result2[param] = result
best_val_accuracy_2 = None
best_hyperparams_2 = None
best_run_2 = None
for hyperparams, run_result in run_result2.items():
    if best_val_accuracy_2 is None or best_val_accuracy_2 < run_result.final_val_accuracy:
        best_val_accuracy_2 = run_result.final_val_accuracy
        best_hyperparams_2 = hyperparams
        best_run_2 = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy_2, best_hyperparams_2))
layers_param_grid = {
'conv1_out': randint(10, 21),
'conv2_out': randint(20, 35),
'n_epoch': randint(15, 31),
'non_linear': [nn.ReLU]
}
run_result2 = {}
for conv1_out, conv2_out, n_epoch, non_linear in tqdm(create_params(layers_param_grid, n_iter=3)):
param = Hyperparams2(conv1_out, conv2_out, n_epoch, non_linear)
    print(param)
lr, anneal_ap, r_str = best_hyperparams
best_model = nn.Sequential(
nn.Conv2d(in_channels=3, kernel_size=5, out_channels=conv1_out, stride=1),
nn.BatchNorm2d(conv1_out),
non_linear(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=conv1_out, kernel_size=5, out_channels=conv2_out),
nn.BatchNorm2d(conv2_out),
non_linear(),
nn.MaxPool2d(2),
nn.Conv2d(in_channels=conv2_out, out_channels=120, kernel_size=5),
nn.BatchNorm2d(120),
non_linear(),
nn.Flatten(),
nn.Linear(in_features=120, out_features=10),
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(best_model.parameters(), lr=lr, weight_decay=r_str)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=anneal_ap)
loss_history, train_history, val_history = train_model(best_model,
train_aug_loader,
val_loader,
loss,
optimizer,
n_epoch,
                                                           scheduler=scheduler)
result = RunResult(best_model, train_history, val_history, val_history[-1])
run_result2[param] = result
best_val_accuracy_2 = None
best_hyperparams_2 = None
best_run_2 = None
for hyperparams, run_result in run_result2.items():
    if best_val_accuracy_2 is None or best_val_accuracy_2 < run_result.final_val_accuracy:
        best_val_accuracy_2 = run_result.final_val_accuracy
        best_hyperparams_2 = hyperparams
        best_run_2 = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy_2, best_hyperparams_2))
for key, value in run_result2.items():
print(key, value.final_val_accuracy)
conv1_out = 20
conv2_out = 25
n_epoch = 20
non_linear = nn.ReLU  # pin the activation; previously this leaked in from the search loop above
best_model = nn.Sequential(
nn.Conv2d(in_channels=3, kernel_size=5, out_channels=conv1_out, stride=1),
nn.BatchNorm2d(conv1_out),
non_linear(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=conv1_out, kernel_size=5, out_channels=conv2_out),
nn.BatchNorm2d(conv2_out),
non_linear(),
nn.MaxPool2d(2),
nn.Conv2d(in_channels=conv2_out, out_channels=120, kernel_size=5),
nn.BatchNorm2d(120),
non_linear(),
nn.Flatten(),
nn.Linear(in_features=120, out_features=10),
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(best_model.parameters(), lr=lr, weight_decay=r_str)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=anneal_ap)
loss_history, train_history, val_history = train_model(best_model,
train_aug_loader,
val_loader,
loss,
optimizer,
n_epoch,
                                                       scheduler=scheduler)
###Output
epoch number 0
Average loss: 1.136484, Train accuracy: 0.646214, Val accuracy: 0.781790
Average loss: 0.703671, Train accuracy: 0.783930, Val accuracy: 0.821923
Average loss: 0.614290, Train accuracy: 0.813296, Val accuracy: 0.834824
Average loss: 0.555864, Train accuracy: 0.830717, Val accuracy: 0.845335
Average loss: 0.519868, Train accuracy: 0.843736, Val accuracy: 0.858440
epoch number 5
Average loss: 0.497416, Train accuracy: 0.849828, Val accuracy: 0.868200
Average loss: 0.472497, Train accuracy: 0.857608, Val accuracy: 0.871545
Average loss: 0.457932, Train accuracy: 0.861908, Val accuracy: 0.873114
Average loss: 0.443500, Train accuracy: 0.867317, Val accuracy: 0.874070
Average loss: 0.436205, Train accuracy: 0.870235, Val accuracy: 0.879530
epoch number 10
Average loss: 0.423629, Train accuracy: 0.873716, Val accuracy: 0.878643
Average loss: 0.418419, Train accuracy: 0.874740, Val accuracy: 0.877005
Average loss: 0.408442, Train accuracy: 0.879381, Val accuracy: 0.883762
Average loss: 0.404500, Train accuracy: 0.879620, Val accuracy: 0.885059
Average loss: 0.401963, Train accuracy: 0.880166, Val accuracy: 0.888062
epoch number 15
Average loss: 0.393858, Train accuracy: 0.883169, Val accuracy: 0.884513
Average loss: 0.391015, Train accuracy: 0.885438, Val accuracy: 0.886902
Average loss: 0.386207, Train accuracy: 0.885899, Val accuracy: 0.888335
Average loss: 0.383398, Train accuracy: 0.886940, Val accuracy: 0.892158
Average loss: 0.382253, Train accuracy: 0.888220, Val accuracy: 0.891065
###Markdown
The final chord - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set yourself. In the end you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Test accuracy: %2.4f" % final_test_accuracy)
###Output
Test accuracy: 0.8896
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Details, just in case - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10)
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None, print_full_history=True):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        if scheduler:
            scheduler.step()  # step the scheduler once per epoch
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if print_full_history:
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
else:
if epoch == 0 or epoch == (num_epochs - 1):
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f, epoch num: %i" % (ave_loss, train_accuracy, val_accuracy, epoch))
return loss_history, train_history, val_history
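# Illustrative call with an LR schedule (an assumption, not part of the assignment):
# sched = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.2)
# train_model(model, train_loader, val_loader, loss, optimizer, 10, scheduler=sched)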
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# Don't forget to move the data to device before running it through the model!
correct = 0
total = 0
for (x, y) in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
pred = model(x_gpu)
indices = torch.argmax(pred, 1)
correct += torch.sum(indices == y_gpu)
total += y_gpu.shape[0]
return float(correct) / total
# scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.2) # add scheduler
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.370235, Train accuracy: 0.890199, Val accuracy: 0.855573
Average loss: 0.366488, Train accuracy: 0.889653, Val accuracy: 0.843151
Average loss: 0.359449, Train accuracy: 0.892264, Val accuracy: 0.858098
Average loss: 0.355024, Train accuracy: 0.891683, Val accuracy: 0.851205
Average loss: 0.350557, Train accuracy: 0.893987, Val accuracy: 0.854140
###Markdown
Data augmentation. When working with images, one of the especially important techniques is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data look like data that could occur in real life; otherwise the benefit of augmentation shrinks and can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms:- ColorJitter - random color change- RandomHorizontalFlip - horizontal flip with 50% probability- RandomVerticalFlip - vertical flip with 50% probability- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, inspecting the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./',
transform=tfs
)
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size, sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.547490, Train accuracy: 0.837047, Val accuracy: 0.854481
Average loss: 0.480162, Train accuracy: 0.852336, Val accuracy: 0.864241
Average loss: 0.464104, Train accuracy: 0.859110, Val accuracy: 0.864446
Average loss: 0.456499, Train accuracy: 0.859656, Val accuracy: 0.849498
Average loss: 0.445704, Train accuracy: 0.864877, Val accuracy: 0.854686
###Markdown
LeNet. Let's implement the classic convolutional network architecture proposed by Yann LeCun in 1998. Back in the day it achieved impressive results on MNIST - let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know. If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.332039, Train accuracy: 0.553322, Val accuracy: 0.799263
Average loss: 0.554227, Train accuracy: 0.835955, Val accuracy: 0.860010
Average loss: 0.465584, Train accuracy: 0.861806, Val accuracy: 0.870043
Average loss: 0.419195, Train accuracy: 0.875508, Val accuracy: 0.872773
Average loss: 0.382746, Train accuracy: 0.884688, Val accuracy: 0.876527
Average loss: 0.359842, Train accuracy: 0.892844, Val accuracy: 0.872227
Average loss: 0.337912, Train accuracy: 0.899311, Val accuracy: 0.887380
Average loss: 0.324927, Train accuracy: 0.902228, Val accuracy: 0.884923
Average loss: 0.311207, Train accuracy: 0.907484, Val accuracy: 0.894342
Average loss: 0.299666, Train accuracy: 0.908968, Val accuracy: 0.891134
###Markdown
Hyperparameter tuning
###Code
class GridSearch:
"the random iteration"
def __init__(self, *args):
self.args = args
def rand_iterator(self, num):
from itertools import product
from random import sample
sample_list = list(product(*self.args))
num = min(num, len(sample_list))
return sample(sample_list, num)
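# Example usage (illustrative): GridSearch([1e-1, 1e-2], [5, 10]).rand_iterator(3)
# returns up to 3 random (learning_rate, anneal_epoch) pairs from the product grid.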
from tqdm import tqdm_notebook
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_coeffs', 'anneal_epochs', 'regs'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2, 1e-3]
anneal_coeffs = [0.2]
anneal_epochs = [5, 10, 15, 20]
regs = [1e-3, 1e-4, 1e-5]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
full_params = Hyperparams(learning_rates, anneal_coeffs, anneal_epochs, regs)
grid_search = GridSearch(*full_params)
rand_iterator = grid_search.rand_iterator(50)
for params in tqdm_notebook(rand_iterator, total=len(rand_iterator)):
params = Hyperparams(params[0], params[1], params[2], params[3])
num_epochs = 10
while num_epochs <= params.anneal_epochs:
num_epochs *= 2
print(f"num_epochs: {num_epochs}, params: {params}")
optimizer = optim.SGD(lenet_model.parameters(), lr=params.learning_rate, weight_decay=params.regs)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=params.anneal_epochs, gamma=params.anneal_coeffs) # add scheduler
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
loss_history, train_history, val_history = train_model(lenet_model, train_loader, val_loader, loss,
optimizer, num_epochs, scheduler=scheduler, print_full_history=False)
run_record[params] = RunResult(model=nn_model, train_history=train_history, val_history=val_history, final_val_accuracy=val_history[-1])
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.2) # add scheduler
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10, scheduler)
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. What you can and should try:- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)- Changing the number of layers and their width- Varying the number of training epochs- Trying other augmentations
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.BatchNorm2d(6),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.BatchNorm2d(16),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.BatchNorm1d(120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.BatchNorm1d(84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
optimizer = optim.Adam(best_model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=2) # cosine schedule that restarts every 2 epochs
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 10, scheduler=scheduler)
###Output
Average loss: 0.214649, Train accuracy: 0.932328, Val accuracy: 0.911678
Average loss: 0.181904, Train accuracy: 0.942412, Val accuracy: 0.914477
Average loss: 0.205869, Train accuracy: 0.934444, Val accuracy: 0.911815
Average loss: 0.176995, Train accuracy: 0.943948, Val accuracy: 0.912497
Average loss: 0.205109, Train accuracy: 0.934239, Val accuracy: 0.909085
Average loss: 0.172584, Train accuracy: 0.944818, Val accuracy: 0.913726
Average loss: 0.200776, Train accuracy: 0.936030, Val accuracy: 0.908607
Average loss: 0.171292, Train accuracy: 0.945620, Val accuracy: 0.913794
Average loss: 0.196727, Train accuracy: 0.936508, Val accuracy: 0.907720
Average loss: 0.168917, Train accuracy: 0.946354, Val accuracy: 0.911883
###Markdown
The final chord - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set yourself. In the end you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
# Evaluate on the held-out test split rather than the training data
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
Final test accuracy - 0.9755315155444835
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./data/', split='train',
download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
data_test = dset.SVHN('./data/', split='test',
download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
Using downloaded and verified file: ./data/train_32x32.mat
Using downloaded and verified file: ./data/test_32x32.mat
###Markdown
Splitting the data into training and validation. Details, just in case - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1), # input 3*32*32, output 64*32*32
nn.ReLU(inplace=True),
nn.MaxPool2d(4), # output 64*8*8
nn.Conv2d(64, 64, 3, padding=1), # output 64*8*8
nn.ReLU(inplace=True),
nn.MaxPool2d(4), # output 64*2*2
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
scheduler.step()
print(f'Epoch {epoch} '
f'Average loss: {ave_loss:.4f} '
f'Train accuracy: {train_accuracy:.4f} '
f'Val accuracy: {val_accuracy:.4f}')
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
    correct_samples = 0
    total_samples = 0
    for i_step, (x, y) in enumerate(loader):
        x_gpu = x.to(device)
        y_gpu = y.to(device)
        pred_prob = model(x_gpu)
        pred_labels = pred_prob.argmax(dim=1)
        # count correct predictions; dividing by batch_size would be wrong for the last, smaller batch
        correct_samples += int(torch.sum(pred_labels == y_gpu))
        total_samples += y_gpu.shape[0]
    return correct_samples / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
###Markdown
Data augmentation. When working with images, one of the especially important techniques is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data look like data that could occur in real life; otherwise the benefit of augmentation shrinks and can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms:- ColorJitter - random color change- RandomHorizontalFlip - horizontal flip with 50% probability- RandomVerticalFlip - vertical flip with 50% probability- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./', download=True, transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.8/dist-packages/torchvision/transforms/transforms.py:1230: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
warnings.warn(
###Markdown
Let's visualize the augmentation results (in general, inspecting the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./', download=True, transform=tfs)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
Using downloaded and verified file: ./train_32x32.mat
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomAffine(degrees=(-45, 45)),
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./', download=True, transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_train):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomAffine(degrees=(-30, 30)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./', download=True, transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1), # input 3*32*32, output 64*32*32
nn.ReLU(inplace=True),
nn.MaxPool2d(4), # output 64*8*8
nn.Conv2d(64, 64, 3, padding=1), # output 64*8*8
nn.ReLU(inplace=True),
nn.MaxPool2d(4), # output 64*2*2
Flattener(),
nn.Linear(64*2*2, 10),
)
# Note we shouldn't use augmentations on validation
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Epoch 0 Average loss: 1.8405 Train accuracy: 0.3642 Val accuracy: 0.6489
Epoch 1 Average loss: 1.1364 Train accuracy: 0.6284 Val accuracy: 0.7377
Epoch 2 Average loss: 0.9099 Train accuracy: 0.7091 Val accuracy: 0.7596
Epoch 3 Average loss: 0.8634 Train accuracy: 0.7270 Val accuracy: 0.7697
Epoch 4 Average loss: 0.7922 Train accuracy: 0.7521 Val accuracy: 0.7766
Epoch 5 Average loss: 0.7777 Train accuracy: 0.7564 Val accuracy: 0.7950
Epoch 6 Average loss: 0.7366 Train accuracy: 0.7710 Val accuracy: 0.7987
Epoch 7 Average loss: 0.7296 Train accuracy: 0.7739 Val accuracy: 0.8072
Epoch 8 Average loss: 0.7181 Train accuracy: 0.7784 Val accuracy: 0.8104
Epoch 9 Average loss: 0.7104 Train accuracy: 0.7800 Val accuracy: 0.8110
###Markdown
LeNet. Let's implement the classic convolutional network architecture proposed by Yann LeCun in 1998. Back in the day it achieved impressive results on MNIST - let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know. If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
    nn.Conv2d(3, 6, 5), # input 3*32*32, output 6*28*28
nn.Tanh(),
nn.AvgPool2d(2, stride=2), # output 6*14*14
nn.Tanh(),
nn.Conv2d(6, 16, 5), # output 16*10*10
nn.Tanh(),
nn.AvgPool2d(2, stride=2), # output 16*5*5
nn.Tanh(),
Flattener(),
nn.Linear(16*5*5, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10),
    nn.Softmax(dim=1)  # note: CrossEntropyLoss applies log-softmax itself, so this extra softmax flattens gradients and slows learning
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_loader, val_loader, loss, optimizer, 10)
###Output
Epoch 0 Average loss: 2.2772 Train accuracy: 0.1787 Val accuracy: 0.1871
Epoch 1 Average loss: 2.2503 Train accuracy: 0.1995 Val accuracy: 0.2381
Epoch 2 Average loss: 2.1841 Train accuracy: 0.2727 Val accuracy: 0.3017
Epoch 3 Average loss: 2.1226 Train accuracy: 0.3333 Val accuracy: 0.3597
Epoch 4 Average loss: 2.0835 Train accuracy: 0.3792 Val accuracy: 0.3985
Epoch 5 Average loss: 2.0568 Train accuracy: 0.4104 Val accuracy: 0.4147
Epoch 6 Average loss: 2.0367 Train accuracy: 0.4287 Val accuracy: 0.4347
Epoch 7 Average loss: 2.0257 Train accuracy: 0.4394 Val accuracy: 0.4441
Epoch 8 Average loss: 2.0177 Train accuracy: 0.4465 Val accuracy: 0.4475
Epoch 9 Average loss: 2.0133 Train accuracy: 0.4498 Val accuracy: 0.4507
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
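# A minimal random-search sketch (an illustrative assumption, not the graded answer).
# Note: this notebook's train_model builds its own StepLR scheduler internally, so the
# sampled anneal_epochs value is recorded but not wired into training; the same
# lenet_model instance is also reused (not reinitialized) between runs.
import random
for _ in range(5):
    lr = 10 ** random.uniform(-4, 0)   # sample the learning rate in log space
    ae = random.choice(anneal_epochs)
    r = random.choice(reg)
    params = Hyperparams(lr, ae, r)
    optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=r)
    _, train_h, val_h = train_model(lenet_model, train_loader, val_loader, loss,
                                    optimizer, epoch_num)
    run_record[params] = RunResult(lenet_model, train_h, val_h, val_h[-1])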
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.htmlbatchnorm2d) in PyTorch) - changing the number of layers and their width - changing the number of training epochs - trying other augmentations as well
###Code
# add lr scheduler, batch normalization, Adam optimization
nn_model = nn.Sequential(
nn.Conv2d(3, 32, 7, padding=5), # input 3*32*32, output 32*32*32
nn.ReLU(inplace=True),
nn.MaxPool2d(4), # output 32*8*8
nn.BatchNorm2d(32),
nn.Conv2d(32, 64, 5, padding=3), # output 64*8*8
nn.ReLU(inplace=True),
nn.MaxPool2d(2), # output 64*4*4
nn.BatchNorm2d(64),
nn.Conv2d(64, 128, 3, padding=1), # output 128*4*4
nn.ReLU(inplace=True),
nn.MaxPool2d(2), # output 128*2*2
nn.BatchNorm2d(128),
Flattener(),
nn.Linear(128*2*2, 256),
nn.ReLU(inplace=True),
nn.BatchNorm1d(256),
nn.Linear(256, 128),
nn.ReLU(inplace=True),
nn.BatchNorm1d(128),
nn.Linear(128, 10)
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(nn_model.parameters(), lr=1e-1)
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 10)
# add lr scheduler, batch normalization, Adam optimization, LeakyReLU activation
nn_model = nn.Sequential(
nn.Conv2d(3, 32, 7, padding=5), # input 3*32*32, output 32*32*32
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2), # output 32*16*16
nn.BatchNorm2d(32),
nn.Conv2d(32, 64, 5, padding=3), # output 64*16*16
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2), # output 64*8*8
nn.BatchNorm2d(64),
nn.Conv2d(64, 128, 3, padding=1), # output 128*8*8
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2), # output 128*4*4
nn.BatchNorm2d(128),
nn.Conv2d(128, 256, 3, padding=1), # output 256*4*4
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2), # output 256*2*2
nn.BatchNorm2d(256),
Flattener(),
nn.Linear(256*2*2, 512),
nn.LeakyReLU(inplace=True),
nn.BatchNorm1d(512),
nn.Linear(512, 256),
nn.LeakyReLU(inplace=True),
nn.BatchNorm1d(256),
nn.Linear(256, 10)
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(nn_model.parameters(), lr=1e-1)
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 10)
###Output
Epoch 0 Average loss: 1.9841 Train accuracy: 0.3321 Val accuracy: 0.7102
Epoch 1 Average loss: 0.6576 Train accuracy: 0.8008 Val accuracy: 0.8513
Epoch 2 Average loss: 0.3738 Train accuracy: 0.8867 Val accuracy: 0.8815
Epoch 3 Average loss: 0.3431 Train accuracy: 0.8964 Val accuracy: 0.8828
Epoch 4 Average loss: 0.2461 Train accuracy: 0.9265 Val accuracy: 0.9033
Epoch 5 Average loss: 0.2165 Train accuracy: 0.9350 Val accuracy: 0.9117
Epoch 6 Average loss: 0.1470 Train accuracy: 0.9566 Val accuracy: 0.9193
Epoch 7 Average loss: 0.1159 Train accuracy: 0.9647 Val accuracy: 0.9197
Epoch 8 Average loss: 0.0641 Train accuracy: 0.9806 Val accuracy: 0.9211
Epoch 9 Average loss: 0.0430 Train accuracy: 0.9867 Val accuracy: 0.9189
###Markdown
The final chord - let's check the best model on the test set. For a change, this time you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
test_accuracy = compute_accuracy(nn_model, test_loader)
print(f'Final test accuracy: {test_accuracy:.4f}')
###Output
Final test accuracy: 0.9174
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
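# Optional sanity check (our addition): fail fast if Colab did not actually
# allocate a GPU runtime for this session.
assert torch.cuda.is_available(), "No GPU found - enable it via Runtime -> Change runtime type"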
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
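# (illustrative note, our addition) e.g. an activation of shape
# (64, 16, 5, 5) becomes (64, 400), ready to feed an nn.Linear layer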
###Output
_____no_output_____
###Markdown
Let's create the simplest possible network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in this version is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None,):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
if scheduler:
scheduler.step() # use the scheduler (PyTorch recommends calling step() after the epoch's optimizer updates, i.e. at the end of the epoch)
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Implement the inference of the model on all of the batches from loader,
# and compute the overall accuracy.
# Hint: PyTorch has the argmax function!
correct_samples = 0
total_samples = 0
for (x, y) in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
indices = torch.argmax(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
val_accuracy = float(correct_samples) / total_samples
return val_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.407771, Train accuracy: 0.530355, Val accuracy: 0.729370
Average loss: 0.704501, Train accuracy: 0.786711, Val accuracy: 0.806088
Average loss: 0.597260, Train accuracy: 0.820530, Val accuracy: 0.814688
Average loss: 0.549296, Train accuracy: 0.834932, Val accuracy: 0.833390
Average loss: 0.521415, Train accuracy: 0.843429, Val accuracy: 0.848475
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmltransforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
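# Note (our addition): newer torchvision releases rename RandomRotation's
# `resample` argument to `interpolation`; keep that in mind if this cell
# raises a TypeError on a recent version.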
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
rows = 3
cols = 10
fig = plt.figure(figsize=(20, 6))
fig.suptitle('\nOriginal images:', fontsize=16)
for i, (x, y) in enumerate(dset.SVHN('./')):
if i == rows * cols:
break
plt.subplot(rows, cols, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
fig = plt.figure(figsize=(30, 3))
fig.suptitle('\nAugmented images:', fontsize=16)
for i, (x, y) in enumerate(data_aug_vis):
if i == (rows * cols):
break
plt.subplot(rows, cols, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones. (For house-number digits, horizontal and vertical flips produce images no real sign contains - a flipped 2 or 5 is no longer that digit - which is why the next cell keeps only color jitter and a small rotation.)
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size, sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.610021, Train accuracy: 0.813108, Val accuracy: 0.852365
Average loss: 0.566076, Train accuracy: 0.830632, Val accuracy: 0.849498
Average loss: 0.544929, Train accuracy: 0.834283, Val accuracy: 0.826019
Average loss: 0.524016, Train accuracy: 0.841910, Val accuracy: 0.858167
Average loss: 0.512086, Train accuracy: 0.844060, Val accuracy: 0.848475
###Markdown
LeNet. Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is hard to follow, just google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.Tanh(),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(5*5*16, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10),
    nn.Tanh()  # note: a bounded activation on the output logits limits their range; CrossEntropyLoss expects raw logits, so this is usually omitted
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(5*5*16, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10),
    nn.ReLU(inplace=True),  # note: ReLU on the output logits zeroes all negative scores; usually omitted before CrossEntropyLoss
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Markdown
Hyperparameter tuning. Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.htmlbatchnorm2d) in PyTorch) - changing the number of layers and their width - changing the number of training epochs - trying other augmentations as well. The final chord - let's check the best model on the test set. For a change, this time you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
HyperParams = namedtuple("Hyperparams", ['learning_rate', 'reg_strength', 'epochs', 'step_size', 'gamma'])
RunResult = namedtuple("RunResult", ['model', 'loss_history', 'train_history', 'val_history'])
model = nn.Sequential(
# 32x32@3 => 32x32@16
nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, padding=2),
nn.BatchNorm2d(num_features=16),
nn.ReLU(inplace=True),
# 32x32@16 => 16x16@16
nn.MaxPool2d(kernel_size=2),
# 16x16@16 => 14x14@32
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3),
nn.BatchNorm2d(num_features=32),
nn.ReLU(inplace=True),
# 14x14@32 => 7x7@32
nn.MaxPool2d(kernel_size=2),
# 7x7@32 => 5x5@64
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),
nn.BatchNorm2d(num_features=64),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(in_features=5*5*64, out_features=128),
nn.BatchNorm1d(num_features=128),
nn.ReLU(inplace=True),
nn.Linear(in_features=128, out_features=10)
)
model.type(torch.cuda.FloatTensor)
model.to(device)
def search(stats, model, learning_rates, reg_strengths, epochs, step_size, gamma):
for learning_rate in learning_rates:
for reg_strength in reg_strengths:
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(model.parameters(), lr=learning_rate, weight_decay=reg_strength)
#scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=anneal_factor, patience=anneal_patience, verbose=True)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=gamma) # add scheduler
key = HyperParams(learning_rate, reg_strength, epochs, step_size, gamma)
print('Training model: %s' % str(key))
loss_history, train_history, val_history = train_model(model, train_aug_loader, val_loader, loss, optimizer, epochs, scheduler=scheduler)
value = RunResult(model, loss_history, train_history, val_history)
stats[key] = value
print('\n')
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
learning_rates = [1e-2, 1e-3]
reg_strengths = [1e-4, 1e-5]
#step_size = [2, 10, 15]
#gamma = [0.2, 0.5]
#step_size = 2, gamma = 0.2 test = 0.92689
#step_size = 15, gamma = 0.5 test =
search(run_record, model, learning_rates, reg_strengths, epochs=15, step_size=15, gamma=0.5)
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.val_history[-1]:
    best_val_accuracy = run_result.val_history[-1]
best_hyperparams = hyperparams
best_run = run_result
print(best_val_accuracy)
print(best_hyperparams)
best_model = run_record[best_hyperparams].model
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
test_accuracy = compute_accuracy(best_model, test_loader)
print(test_accuracy)
lenet_model = nn.Sequential(
nn.Conv2d(3, 18, 5, padding=0),
nn.BatchNorm2d(18),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(18, 24, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(24*5*5, 200),
nn.BatchNorm1d(200),
nn.ReLU(inplace=True),
nn.Linear(200, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 20)
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
test_accuracy = compute_accuracy(lenet_model, test_loader)
print(test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest possible network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in this version is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler = None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
if scheduler is not None:
scheduler.step()
ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
val_accuracy = float(correct_samples) / total_samples
return val_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.384971, Train accuracy: 0.538614, Val accuracy: 0.779196
Average loss: 0.685683, Train accuracy: 0.791352, Val accuracy: 0.786567
Average loss: 0.587269, Train accuracy: 0.824608, Val accuracy: 0.827111
Average loss: 0.540846, Train accuracy: 0.838293, Val accuracy: 0.828339
Average loss: 0.510836, Train accuracy: 0.848377, Val accuracy: 0.828544
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmltransforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.axis('off')
plt.imshow(x)
#plt.show()
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.593109, Train accuracy: 0.818005, Val accuracy: 0.846973
Average loss: 0.551600, Train accuracy: 0.830632, Val accuracy: 0.847382
Average loss: 0.528634, Train accuracy: 0.838515, Val accuracy: 0.832025
Average loss: 0.513244, Train accuracy: 0.843719, Val accuracy: 0.845813
Average loss: 0.500644, Train accuracy: 0.848053, Val accuracy: 0.864651
###Markdown
LeNet. Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is hard to follow, just google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
nn.Conv2d(6, 16, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
Flattener(),
nn.Linear(16 * 5 * 5, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.278501, Train accuracy: 0.567877, Val accuracy: 0.796806
Average loss: 0.562527, Train accuracy: 0.828396, Val accuracy: 0.863286
Average loss: 0.483218, Train accuracy: 0.852933, Val accuracy: 0.871408
Average loss: 0.441390, Train accuracy: 0.866413, Val accuracy: 0.871067
Average loss: 0.411978, Train accuracy: 0.873733, Val accuracy: 0.871476
Average loss: 0.392221, Train accuracy: 0.879944, Val accuracy: 0.885946
Average loss: 0.370595, Train accuracy: 0.886599, Val accuracy: 0.882261
Average loss: 0.357252, Train accuracy: 0.891001, Val accuracy: 0.889769
Average loss: 0.343471, Train accuracy: 0.894925, Val accuracy: 0.892431
Average loss: 0.332205, Train accuracy: 0.897741, Val accuracy: 0.890519
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2]
anneal_coeff = 0.2
anneal_epochs = [2, 4]
regs = [1e-4, 1e-5]
batch_size = 64
epoch_num = 6
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
for lr in learning_rates:
for anneal_epoch in anneal_epochs:
for reg in regs:
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
nn.Conv2d(6, 16, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
Flattener(),
nn.Linear(16 * 5 * 5, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
params = Hyperparams(lr, anneal_epoch, reg)
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
result = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[params] = result
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.90, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=4, reg=1e-05)
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.htmlbatchnorm2d) in PyTorch) - changing the number of layers and their width - changing the number of training epochs - trying other augmentations as well
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 256, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(256 * 2 * 2, 120),
nn.BatchNorm1d(120),
nn.ReLU(inplace=True),
nn.Linear(120, 10)
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
optimizer = optim.Adam(best_model.parameters(), lr=1e-3, weight_decay=1e-5)
scheduler = optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=1e-3, cycle_momentum=False)
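# (our note) cycle_momentum=False is required with Adam: CyclicLR cycles the
# optimizer's `momentum` field by default, and Adam does not have one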
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 20, scheduler)
###Output
Average loss: 1.033389, Train accuracy: 0.712111, Val accuracy: 0.860760
Average loss: 0.486625, Train accuracy: 0.863086, Val accuracy: 0.892158
Average loss: 0.377618, Train accuracy: 0.889431, Val accuracy: 0.904716
Average loss: 0.322338, Train accuracy: 0.903593, Val accuracy: 0.910040
Average loss: 0.282183, Train accuracy: 0.917295, Val accuracy: 0.914955
Average loss: 0.252749, Train accuracy: 0.924615, Val accuracy: 0.918572
Average loss: 0.225077, Train accuracy: 0.933113, Val accuracy: 0.919050
Average loss: 0.205312, Train accuracy: 0.938965, Val accuracy: 0.921098
Average loss: 0.186355, Train accuracy: 0.944613, Val accuracy: 0.923009
Average loss: 0.168551, Train accuracy: 0.950005, Val accuracy: 0.923145
Average loss: 0.157428, Train accuracy: 0.953298, Val accuracy: 0.920278
Average loss: 0.147672, Train accuracy: 0.954885, Val accuracy: 0.922736
Average loss: 0.131888, Train accuracy: 0.959612, Val accuracy: 0.924237
Average loss: 0.121038, Train accuracy: 0.963366, Val accuracy: 0.920074
Average loss: 0.114556, Train accuracy: 0.965328, Val accuracy: 0.920620
Average loss: 0.104609, Train accuracy: 0.968246, Val accuracy: 0.922394
Average loss: 0.099093, Train accuracy: 0.969065, Val accuracy: 0.920893
Average loss: 0.090629, Train accuracy: 0.971948, Val accuracy: 0.923145
Average loss: 0.085228, Train accuracy: 0.973638, Val accuracy: 0.918163
Average loss: 0.080119, Train accuracy: 0.975241, Val accuracy: 0.919391
###Markdown
The final chord - let's check the best model on the test set. For a change, this time you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 32
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest possible network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in this version is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = float(loss_accum / (i_step + 1))  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
if scheduler is not None:
    scheduler.step(val_accuracy)
loss_history.append(ave_loss)
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Epoch: %d, Average loss: %f, Train accuracy: %f, Val accuracy: %f"
% (epoch, ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Epoch: 0, Average loss: 0.838227, Train accuracy: 0.733099, Val accuracy: 0.770869
Epoch: 1, Average loss: 0.652211, Train accuracy: 0.800822, Val accuracy: 0.802334
Epoch: 2, Average loss: 0.588906, Train accuracy: 0.822987, Val accuracy: 0.818784
Epoch: 3, Average loss: 0.549236, Train accuracy: 0.835034, Val accuracy: 0.823562
Epoch: 4, Average loss: 0.520964, Train accuracy: 0.843804, Val accuracy: 0.819876
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmltransforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, interpolation=transforms.InterpolationMode.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, interpolation=transforms.InterpolationMode.BILINEAR),
])
data_aug_vis = dset.SVHN('./', transform=tfs)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(15, interpolation=transforms.InterpolationMode.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history_aug, train_history_aug, val_history_aug = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Epoch: 0, Average loss: 1.129444, Train accuracy: 0.631232, Val accuracy: 0.734694
Epoch: 1, Average loss: 0.935386, Train accuracy: 0.697795, Val accuracy: 0.754283
Epoch: 2, Average loss: 0.876157, Train accuracy: 0.717674, Val accuracy: 0.754146
Epoch: 3, Average loss: 0.843248, Train accuracy: 0.729908, Val accuracy: 0.754488
Epoch: 4, Average loss: 0.813180, Train accuracy: 0.739890, Val accuracy: 0.741315
###Markdown
LeNet. Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is hard to follow, just google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(16, 120, 5),
nn.LeakyReLU(inplace=True),
Flattener(),
nn.Linear(120, 84),
nn.LeakyReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Epoch: 0, Average loss: 1.348350, Train accuracy: 0.543101, Val accuracy: 0.789161
Epoch: 1, Average loss: 0.549987, Train accuracy: 0.833703, Val accuracy: 0.863422
Epoch: 2, Average loss: 0.458908, Train accuracy: 0.864400, Val accuracy: 0.878780
Epoch: 3, Average loss: 0.411691, Train accuracy: 0.878101, Val accuracy: 0.877551
Epoch: 4, Average loss: 0.372735, Train accuracy: 0.889807, Val accuracy: 0.887448
Epoch: 5, Average loss: 0.347727, Train accuracy: 0.895301, Val accuracy: 0.894000
Epoch: 6, Average loss: 0.327728, Train accuracy: 0.903133, Val accuracy: 0.896253
Epoch: 7, Average loss: 0.314056, Train accuracy: 0.906375, Val accuracy: 0.893659
Epoch: 8, Average loss: 0.298133, Train accuracy: 0.910845, Val accuracy: 0.896321
Epoch: 9, Average loss: 0.285396, Train accuracy: 0.913217, Val accuracy: 0.885127
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers
from itertools import product
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2, 1e-3]
anneal_coeff = 0.2
anneal_epochs = [4, 16]
reg = [1e-4, 1e-5]
batch_size = 64
epoch_num = 10
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
num = 1
for lr, an_ep, re in product(learning_rates, anneal_epochs, reg):
header = f"--- Model: {num}, with params: learning rate = {lr}, anneal epochs = {an_ep}, regularization = {re}"
print(header)
params = Hyperparams(lr, an_ep, re)
model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.LeakyReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(16, 120, 5),
nn.LeakyReLU(inplace=True),
Flattener(),
nn.Linear(120, 84),
nn.LeakyReLU(inplace=True),
nn.Linear(84, 10)
)
model.type(torch.cuda.FloatTensor)
model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(model.parameters(), lr=lr, weight_decay=re)
lr_lambda = lambda epoch: anneal_coeff if epoch % an_ep == 0 else 1.0
scheduler = optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda=lr_lambda)
loss_history, train_history, val_history = train_model(
    model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
run_record[params] = RunResult(model, train_history, val_history, val_history[-1])
num += 1
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.90, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=16, reg=1e-05)
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.htmlbatchnorm2d) in PyTorch) - changing the number of layers and their width - changing the number of training epochs - trying other augmentations as well
###Code
my_model = nn.Sequential( #32x32
nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
nn.BatchNorm2d(16),
#-----------------------
nn.MaxPool2d(2), #16x16
#---------------------
nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
#--------------------
nn.MaxPool2d(2), #8x8
#----------------------
nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
nn.BatchNorm2d(64),
#--------------------
nn.MaxPool2d(2), #4x4
#----------------------
nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
nn.BatchNorm2d(128),
#--------------------
nn.MaxPool2d(2),#2x2
#----------------------
nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
nn.BatchNorm2d(128),
#--------------------
nn.MaxPool2d(2), #1x1
#--------------------
Flattener(),
#----------------------
nn.Linear(128, 512), nn.ReLU(inplace=True),
nn.BatchNorm1d(512),
nn.Linear(512, 10)
)
my_model.type(torch.cuda.FloatTensor)
my_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adadelta(my_model.parameters(), lr=1e-1, weight_decay=1e-2)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.8, patience=3, threshold=0.01, verbose=True)  # note: train_model passes val_accuracy to step(), and ReduceLROnPlateau defaults to mode='min', so mode='max' would be the correct setting here
loss_history, train_history, val_history = train_model(my_model, train_aug_loader, val_loader, loss, optimizer, 15, scheduler)
###Output
Epoch: 0, Average loss: 1.319301, Train accuracy: 0.540986, Val accuracy: 0.846632
Epoch: 1, Average loss: 0.495803, Train accuracy: 0.852234, Val accuracy: 0.892567
Epoch: 2, Average loss: 0.406759, Train accuracy: 0.883613, Val accuracy: 0.904307
Epoch: 3, Average loss: 0.367912, Train accuracy: 0.896581, Val accuracy: 0.909358
Epoch 5: reducing learning rate of group 0 to 8.0000e-02.
Epoch: 4, Average loss: 0.345531, Train accuracy: 0.904856, Val accuracy: 0.923555
Epoch: 5, Average loss: 0.307005, Train accuracy: 0.918404, Val accuracy: 0.927991
Epoch: 6, Average loss: 0.292341, Train accuracy: 0.922499, Val accuracy: 0.923213
Epoch: 7, Average loss: 0.284874, Train accuracy: 0.924445, Val accuracy: 0.928606
Epoch 9: reducing learning rate of group 0 to 6.4000e-02.
Epoch: 8, Average loss: 0.275909, Train accuracy: 0.926322, Val accuracy: 0.927513
Epoch: 9, Average loss: 0.253291, Train accuracy: 0.934051, Val accuracy: 0.936182
Epoch: 10, Average loss: 0.244141, Train accuracy: 0.936559, Val accuracy: 0.931882
Epoch: 11, Average loss: 0.239195, Train accuracy: 0.938761, Val accuracy: 0.938366
Epoch 13: reducing learning rate of group 0 to 5.1200e-02.
Epoch: 12, Average loss: 0.234012, Train accuracy: 0.940433, Val accuracy: 0.935568
Epoch: 13, Average loss: 0.215055, Train accuracy: 0.944989, Val accuracy: 0.934680
Epoch: 14, Average loss: 0.209538, Train accuracy: 0.948248, Val accuracy: 0.940550
###Markdown
The final chord - let's check the best model on the test set. For a change, this time you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(my_model, test_loader)
print(f"Final test accuracy {final_test_accuracy: .4f} ")
import matplotlib.pyplot as plt
plt.figure(figsize=(14, 6), dpi=80)
plt.subplot(1, 2, 1)
plt.title("Loss")
plt.plot(loss_history)
plt.grid()
plt.subplot(1,2,2)
plt.title("Accuracy")
plt.plot(train_history, label="train")
plt.plot(val_history, label="validate")
plt.legend()
plt.grid()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest possible network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, lr_scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is the last batch index, so +1 gives the number of batches
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
if lr_scheduler is not None:
lr_scheduler.step()
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
res = torch.argmax(prediction, dim=1)
total_samples += res.shape[0]
correct_samples += torch.sum(res == y_gpu)
    return float(correct_samples) / total_samples  # cast the GPU tensor count to a plain Python float
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.441529, Train accuracy: 0.864929, Val accuracy: 0.856733
Average loss: 0.441543, Train accuracy: 0.864929, Val accuracy: 0.856733
Average loss: 0.441444, Train accuracy: 0.864929, Val accuracy: 0.856733
Average loss: 0.441507, Train accuracy: 0.864929, Val accuracy: 0.856733
Average loss: 0.441474, Train accuracy: 0.864929, Val accuracy: 0.856733
###Markdown
Data augmentation

When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This effectively "enlarges" the training set, which leads to better network performance.

It is important that the augmented data resemble samples that could occur in real life; otherwise augmentation loses its value and can even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
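# Horizontal and vertical flips are left out below: a mirrored or upside-down digit
# generally no longer looks like its label, so those augmentations would confuse the
# model on SVHN. Mild color jitter and small rotations keep the labels valid.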
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(20, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_my_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_my_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.552558, Train accuracy: 0.830325, Val accuracy: 0.853935
Average loss: 0.538235, Train accuracy: 0.835597, Val accuracy: 0.837349
Average loss: 0.534926, Train accuracy: 0.836109, Val accuracy: 0.827930
Average loss: 0.522943, Train accuracy: 0.840852, Val accuracy: 0.851751
Average loss: 0.519345, Train accuracy: 0.839641, Val accuracy: 0.856733
###Markdown
LeNet

Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its time it achieved impressive results on MNIST; let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try to read the key parts and implement the proposed architecture in PyTorch.

You do **not** need to implement LeNet layers or loss functions that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper isn't clear enough, you can simply google LeNet and figure out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.Tanh(),
nn.MaxPool2d(2, stride=2),
nn.Conv2d(6, 16, 5, padding=0),
nn.Tanh(),
nn.MaxPool2d(2, stride=2),
Flattener(),
nn.Linear(400, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.292604, Train accuracy: 0.563526, Val accuracy: 0.810866
Average loss: 0.655858, Train accuracy: 0.796471, Val accuracy: 0.845540
Average loss: 0.558919, Train accuracy: 0.826571, Val accuracy: 0.850317
Average loss: 0.508241, Train accuracy: 0.842815, Val accuracy: 0.863149
Average loss: 0.472683, Train accuracy: 0.852370, Val accuracy: 0.873592
Average loss: 0.448090, Train accuracy: 0.861721, Val accuracy: 0.879189
Average loss: 0.430179, Train accuracy: 0.866720, Val accuracy: 0.877210
Average loss: 0.413141, Train accuracy: 0.872846, Val accuracy: 0.882397
Average loss: 0.403406, Train accuracy: 0.875269, Val accuracy: 0.880759
Average loss: 0.392772, Train accuracy: 0.879569, Val accuracy: 0.874343
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2, 1e-3]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 5
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
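# The learning-rate and regularization candidates above are already log-spaced,
# so picking uniformly at random from these lists samples them log-uniformly.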
for lr in learning_rates:
for _ in range(3):
an_e = np.random.choice(anneal_epochs)
reg_str = np.random.choice(reg)
lenet = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.Tanh(),
nn.MaxPool2d(2, stride=2),
nn.Conv2d(6, 16, 5, padding=0),
nn.Tanh(),
nn.MaxPool2d(2, stride=2),
Flattener(),
nn.Linear(400, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet.type(torch.cuda.FloatTensor)
lenet.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet.parameters(), lr=lr, weight_decay=reg_str)
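        # StepLR multiplies the learning rate by gamma every step_size epochs,
        # i.e. the LR is annealed by anneal_coeff once every an_e epochs.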
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=an_e, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(lenet, train_aug_loader, val_loader, loss, optimizer, epoch_num, lr_scheduler=scheduler)
run_record[Hyperparams(lr, an_e, reg_str)] = RunResult(lenet, train_history, val_history, val_history[-1])
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.88, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=15, reg=1e-07)
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that beat our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.MaxPool2d(2, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(6),
nn.Conv2d(6, 12, 5, padding=0),
nn.MaxPool2d(2, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(12),
Flattener(),
nn.Linear(5 * 5 * 12, 120),
nn.ReLU(inplace=True),
nn.BatchNorm1d(120),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.BatchNorm1d(84),
nn.Linear(84, 10)
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(best_model.parameters(), lr=0.1, weight_decay=1e-7)
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=5)  # cosine LR schedule with a warm restart every 5 epochs
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 10, lr_scheduler=scheduler)
###Output
Average loss: 0.850550, Train accuracy: 0.726547, Val accuracy: 0.848747
Average loss: 0.521791, Train accuracy: 0.836826, Val accuracy: 0.879530
Average loss: 0.453118, Train accuracy: 0.860492, Val accuracy: 0.885742
Average loss: 0.405313, Train accuracy: 0.875644, Val accuracy: 0.898505
Average loss: 0.375686, Train accuracy: 0.885217, Val accuracy: 0.901645
Average loss: 0.433829, Train accuracy: 0.866021, Val accuracy: 0.893932
Average loss: 0.406254, Train accuracy: 0.874450, Val accuracy: 0.900280
Average loss: 0.374482, Train accuracy: 0.886496, Val accuracy: 0.902874
Average loss: 0.340788, Train accuracy: 0.896546, Val accuracy: 0.908675
Average loss: 0.319712, Train accuracy: 0.901990, Val accuracy: 0.910859
###Markdown
The final touch - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set. In the end you should have trained a model that shows better than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Split the data into training and validation. Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential( #3x32x32
nn.Conv2d(3, 64, 3, padding=1), #64x32x32
nn.ReLU(inplace=True), #64x32x32
nn.MaxPool2d(4), #64x8x8
nn.Conv2d(64, 64, 3, padding=1), #64x8x8
nn.ReLU(inplace=True), #64x8x8
nn.MaxPool2d(4), #64x2x2
    Flattener(), #flatten to 64*2*2 = 256 features
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is the last batch index, so +1 gives the number of batches
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
with torch.no_grad():
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
return float(correct_samples) / total_samples
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.410763, Train accuracy: 0.528086, Val accuracy: 0.715378
Average loss: 0.719593, Train accuracy: 0.781456, Val accuracy: 0.775169
Average loss: 0.610382, Train accuracy: 0.816640, Val accuracy: 0.830865
Average loss: 0.567428, Train accuracy: 0.831007, Val accuracy: 0.816736
Average loss: 0.528010, Train accuracy: 0.841450, Val accuracy: 0.844311
###Markdown
Data augmentation

When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This effectively "enlarges" the training set, which leads to better network performance.

It is important that the augmented data resemble samples that could occur in real life; otherwise augmentation loses its value and can even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
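# Only mild color jitter is kept in this run; the flip and rotation transforms are
# commented out because mirroring or rotating a digit can make it stop matching its label.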
tfs2 = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
# transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
# transforms.RandomVerticalFlip(),
# TODO: Add good augmentations
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train_2 = dset.SVHN('./',
transform=tfs2
)
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = torch.utils.data.DataLoader(data_aug_train_2, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
data_aug_train_2.data.shape
###Output
_____no_output_____
###Markdown
LeNet

Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its time it achieved impressive results on MNIST; let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try to read the key parts and implement the proposed architecture in PyTorch.

You do **not** need to implement LeNet layers or loss functions that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper isn't clear enough, you can simply google LeNet and figure out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential( #3x32x32
nn.Conv2d(3, 6, 5, padding=0), #6x28x28
nn.ReLU(inplace=True), #6x28x28
nn.MaxPool2d(2), #6x14x14
nn.Conv2d(6, 16, 5, padding=0), #16x10x10
nn.ReLU(inplace=True), #16x10x10
nn.MaxPool2d(2), #16x5x5
nn.Conv2d(16, 120, 5, padding=0), #120x1x1
nn.ReLU(inplace=True), #120x1x1
Flattener(),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.228518, Train accuracy: 0.587670, Val accuracy: 0.832981
Average loss: 0.510850, Train accuracy: 0.845579, Val accuracy: 0.868541
Average loss: 0.410852, Train accuracy: 0.877180, Val accuracy: 0.878370
Average loss: 0.360911, Train accuracy: 0.892775, Val accuracy: 0.873524
Average loss: 0.320828, Train accuracy: 0.902706, Val accuracy: 0.883762
Average loss: 0.292999, Train accuracy: 0.911613, Val accuracy: 0.890929
Average loss: 0.269732, Train accuracy: 0.917807, Val accuracy: 0.890861
Average loss: 0.252380, Train accuracy: 0.922687, Val accuracy: 0.877346
Average loss: 0.233784, Train accuracy: 0.928147, Val accuracy: 0.887721
Average loss: 0.218132, Train accuracy: 0.932311, Val accuracy: 0.884581
###Markdown
Hyperparameter tuning
###Code
def train_model_lr_schedule(model, train_loader, val_loader, loss, optimizer, num_epochs, step_size, gamma):
loss_history = []
train_history = []
val_history = []
scheduler = torch.optim.lr_scheduler.StepLR(optimizer=optimizer, step_size=step_size, gamma=gamma, last_epoch=-1, verbose=False)
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is the last batch index, so +1 gives the number of batches
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
scheduler.step()
return loss_history, train_history, val_history
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1]#[1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 5
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
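# The regularization (and learning-rate) candidates above are log-spaced, so uniform
# random choice over these lists already searches in log space.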
for i in reversed(learning_rates):
for k in anneal_epochs:
for j in reg:
print(f"Now running parameters: lr = {i}, anneal_epochs = {k}, reg = {j}")
lenet_model = nn.Sequential( #3x32x32
nn.Conv2d(3, 6, 5, padding=0), #6x28x28
nn.ReLU(inplace=True), #6x28x28
nn.MaxPool2d(2), #6x14x14
nn.Conv2d(6, 16, 5, padding=0), #16x10x10
nn.ReLU(inplace=True), #16x10x10
nn.MaxPool2d(2), #16x5x5
nn.Conv2d(16, 120, 5, padding=0), #120x1x1
nn.ReLU(inplace=True), #120x1x1
Flattener(),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
optimizer = optim.SGD(lenet_model.parameters(), lr=i, weight_decay=j)
loss_history, train_history, val_history = train_model_lr_schedule(
lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, k, 0.2)
run_record[Hyperparams(i,k,j)] = RunResult(lenet_model, train_history, val_history, val_history[-1])
# TODO: Your code here!
# run_record
#Now running parameters: lr = 0.1, anneal_epochs = 5, reg = 0.001
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.89, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=5, reg=1e-05)
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that beat our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations
###Code
best_model = nn.Sequential( #3x32x32
nn.Conv2d(3, 6, 5, padding=0), #6x28x28
nn.ReLU(inplace=True), #6x28x28
nn.MaxPool2d(2), #6x14x14
nn.Conv2d(6, 16, 5, padding=0), #16x10x10
nn.ReLU(inplace=True), #16x10x10
nn.MaxPool2d(2), #16x5x5
nn.Conv2d(16, 120, 5, padding=0), #120x1x1
nn.ReLU(inplace=True), #120x1x1
Flattener(),
nn.BatchNorm1d(120),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.BatchNorm1d(84),
nn.Linear(84, 10),
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(best_model.parameters(), lr=1e-1, weight_decay=1e-05)
# optimizer = torch.optim.Adamax(nn_model.parameters(), lr=1e-1, betas=(0.9, 0.999), eps=1e-08)
loss_history, train_history, val_history = train_model_lr_schedule(
best_model, train_aug_loader, val_loader, loss, optimizer, 10, 5, 0.2)
PATH = 'sgd_trained_net.pth'
torch.save(best_model.state_dict(), PATH)
###Output
_____no_output_____
###Markdown
The final touch - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set. In the end you should have trained a model that shows better than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
with torch.no_grad():
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
return float(correct_samples) / total_samples
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Split the data into training and validation. Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is the last batch index, so +1 gives the number of batches
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
correct_samples = 0
total_samples = 0
for idx, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
    return float(correct_samples) / total_samples  # return a float, not a CUDA tensor
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
###Markdown
Data augmentation

When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This effectively "enlarges" the training set, which leads to better network performance.

It is important that the augmented data resemble samples that could occur in real life; otherwise augmentation loses its value and can even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1231: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
# transforms.RandomHorizontalFlip(),
# transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1231: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
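# Flips are excluded here: a mirrored digit no longer matches its label on SVHN.
# Gentle color jitter and a small rotation leave the labels intact.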
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.607269, Train accuracy: 0.814763, Val accuracy: 0.843901
Average loss: 0.560370, Train accuracy: 0.829352, Val accuracy: 0.827930
Average loss: 0.546149, Train accuracy: 0.834846, Val accuracy: 0.844038
Average loss: 0.527191, Train accuracy: 0.841194, Val accuracy: 0.859805
Average loss: 0.511649, Train accuracy: 0.846176, Val accuracy: 0.851136
###Markdown
LeNet

Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its time it achieved impressive results on MNIST; let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try to read the key parts and implement the proposed architecture in PyTorch.

You do **not** need to implement LeNet layers or loss functions that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper isn't clear enough, you can simply google LeNet and figure out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, (5, 5)),
nn.MaxPool2d((2, 2), (2, 2)),
nn.Tanh(),
nn.Conv2d(6, 16, (5, 5)),
nn.MaxPool2d((2, 2), (2, 2)),
nn.Tanh(),
Flattener(),
nn.Linear(16 * 5 * 5, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10))
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.277519, Train accuracy: 0.570931, Val accuracy: 0.800901
Average loss: 0.576050, Train accuracy: 0.823209, Val accuracy: 0.858030
Average loss: 0.488348, Train accuracy: 0.851107, Val accuracy: 0.864856
Average loss: 0.448557, Train accuracy: 0.863069, Val accuracy: 0.878711
Average loss: 0.413424, Train accuracy: 0.874757, Val accuracy: 0.887107
Average loss: 0.393104, Train accuracy: 0.879432, Val accuracy: 0.883694
Average loss: 0.376556, Train accuracy: 0.884142, Val accuracy: 0.878507
Average loss: 0.360614, Train accuracy: 0.889789, Val accuracy: 0.893659
Average loss: 0.347088, Train accuracy: 0.892383, Val accuracy: 0.894478
Average loss: 0.337608, Train accuracy: 0.895813, Val accuracy: 0.895980
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
from copy import deepcopy
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15]
regs = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
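# The learning-rate and regularization lists above are log-spaced, so sampling
# uniformly from them performs the search in log space.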
for idx in range(10):
model = deepcopy(lenet_model)
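    # Note: each run warm-starts from a copy of the already-trained lenet_model, and
    # anneal_epoch is used below simply as the epoch count - no LR annealing is applied.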
model.type(torch.cuda.FloatTensor)
model.to(device)
lr = np.random.choice(learning_rates)
anneal_epoch = np.random.choice(anneal_epochs)
reg = np.random.choice(regs)
params = Hyperparams(lr, anneal_epoch, reg)
print(params)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(model.parameters(), lr=lr, weight_decay=reg)
loss_history, train_history, val_history = train_model(model, train_aug_loader, val_loader, loss, optimizer, int(anneal_epoch))
result = RunResult(model, train_history, val_history, val_history[-1])
run_record[params] = result
print('-' * 50)
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.86, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=10, reg=1e-07)
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that beat our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 128, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(256 * 2 * 2, 128),
nn.BatchNorm1d(128),
nn.ReLU(inplace=True),
nn.Linear(128, 10)
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
optimizer = optim.Adam(best_model.parameters(), lr=1e-3, weight_decay=1e-5)
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 20)
###Output
Average loss: 0.613664, Train accuracy: 0.809422, Val accuracy: 0.896662
Average loss: 0.336216, Train accuracy: 0.897724, Val accuracy: 0.912566
Average loss: 0.279885, Train accuracy: 0.916305, Val accuracy: 0.923691
Average loss: 0.242542, Train accuracy: 0.927499, Val accuracy: 0.925875
Average loss: 0.214056, Train accuracy: 0.935143, Val accuracy: 0.925193
Average loss: 0.190725, Train accuracy: 0.943760, Val accuracy: 0.929015
Average loss: 0.170181, Train accuracy: 0.948469, Val accuracy: 0.925671
Average loss: 0.149818, Train accuracy: 0.953708, Val accuracy: 0.927240
Average loss: 0.138909, Train accuracy: 0.956643, Val accuracy: 0.924510
Average loss: 0.124092, Train accuracy: 0.960806, Val accuracy: 0.926148
Average loss: 0.113244, Train accuracy: 0.963621, Val accuracy: 0.926421
Average loss: 0.102920, Train accuracy: 0.967051, Val accuracy: 0.928469
Average loss: 0.093474, Train accuracy: 0.969525, Val accuracy: 0.928537
Average loss: 0.087688, Train accuracy: 0.971249, Val accuracy: 0.928128
Average loss: 0.083791, Train accuracy: 0.972409, Val accuracy: 0.924783
Average loss: 0.075656, Train accuracy: 0.974815, Val accuracy: 0.924852
Average loss: 0.069415, Train accuracy: 0.977460, Val accuracy: 0.927240
Average loss: 0.064343, Train accuracy: 0.978876, Val accuracy: 0.927582
Average loss: 0.067606, Train accuracy: 0.977340, Val accuracy: 0.927377
Average loss: 0.061405, Train accuracy: 0.978978, Val accuracy: 0.919186
###Markdown
The final touch - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set. In the end you should have trained a model that shows better than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
# !pip3 install torch torchvision
# !python -m wget http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
print(torch.__version__)
device = torch.device("cuda:0") # Let's make sure GPU is available!
torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./data/', split='train',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./data/', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Split the data into training and validation. Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is the last batch index, so +1 gives the number of batches
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
if scheduler:
scheduler.step()
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
val_accuracy = float(correct_samples) / total_samples
return val_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.390966, Train accuracy: 0.535508, Val accuracy: 0.704935
Average loss: 0.707754, Train accuracy: 0.784203, Val accuracy: 0.812095
Average loss: 0.602157, Train accuracy: 0.819865, Val accuracy: 0.822469
Average loss: 0.554444, Train accuracy: 0.835802, Val accuracy: 0.827657
Average loss: 0.520204, Train accuracy: 0.845715, Val accuracy: 0.838373
###Markdown
Data augmentation

When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This effectively "enlarges" the training set, which leads to better network performance.

It is important that the augmented data resemble samples that could occur in real life; otherwise augmentation loses its value and can even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./data/',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./data/',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
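# Flips are omitted: mirrored digits generally stop looking like their labels,
# so only mild color jitter and a small rotation are kept.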
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_my_train = dset.SVHN('./data/',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_my_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.626222, Train accuracy: 0.811521, Val accuracy: 0.810730
Average loss: 0.568305, Train accuracy: 0.825649, Val accuracy: 0.842878
Average loss: 0.551908, Train accuracy: 0.832986, Val accuracy: 0.816668
Average loss: 0.533631, Train accuracy: 0.839351, Val accuracy: 0.841035
Average loss: 0.516225, Train accuracy: 0.845255, Val accuracy: 0.831070
###Markdown
LeNet

Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its time it achieved impressive results on MNIST; let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try to read the key parts and implement the proposed architecture in PyTorch.

You do **not** need to implement LeNet layers or loss functions that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper isn't clear enough, you can simply google LeNet and figure out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=1),
nn.Tanh(),
nn.AvgPool2d(2),
nn.Conv2d(6, 16, 5, padding=1),
nn.Tanh(),
nn.AvgPool2d(2),
nn.Conv2d(16, 120, 5, padding=1),
nn.Tanh(),
Flattener(),
nn.Linear(1920, 120),
nn.Tanh(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
from torchsummary import summary
summary(lenet_model, (3, 32, 32))
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.980997, Train accuracy: 0.304030, Val accuracy: 0.694560
Average loss: 0.701792, Train accuracy: 0.786097, Val accuracy: 0.820900
Average loss: 0.517134, Train accuracy: 0.841637, Val accuracy: 0.874138
Average loss: 0.446381, Train accuracy: 0.864997, Val accuracy: 0.875094
Average loss: 0.400921, Train accuracy: 0.877862, Val accuracy: 0.882943
Average loss: 0.373021, Train accuracy: 0.886172, Val accuracy: 0.891475
Average loss: 0.347615, Train accuracy: 0.895028, Val accuracy: 0.884991
Average loss: 0.332759, Train accuracy: 0.898986, Val accuracy: 0.896048
Average loss: 0.316150, Train accuracy: 0.903542, Val accuracy: 0.899461
Average loss: 0.300794, Train accuracy: 0.908627, Val accuracy: 0.904512
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
# batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
import random
random.seed(50)
MAX_EVALS = 5
for i in range(MAX_EVALS):
print(f"\n{i} attempt")
hyperparams = Hyperparams(random.choice(learning_rates),
random.choice(anneal_epochs),
random.choice(reg))
optimizer = optim.SGD(lenet_model.parameters(), lr=hyperparams.learning_rate,
weight_decay=hyperparams.reg)
scheduler = optim.lr_scheduler.StepLR(optimizer, hyperparams.anneal_epochs, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler=scheduler)
run_result = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[hyperparams] = run_result
print(run_record)
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.92, best hyperparams: Hyperparams(learning_rate=0.01, anneal_epochs=1, reg=1e-05)
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that beat our baselines.

Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations as well
###Code
best_model = best_run.model  # take the model from the best run, not just the last one
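# A minimal sketch of the suggested improvements (an illustration, not part of
# the original solution): a deeper stack with BatchNorm2d after each
# convolution. The name improved_model is hypothetical and unused elsewhere;
# train it with train_model as before if you want to compare against best_model.
improved_model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
    nn.Conv2d(128, 256, 3, padding=1),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
    Flattener(),
    nn.Linear(256 * 4 * 4, 10),  # 32 -> 16 -> 8 -> 4 after three 2x poolings
).to(device)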
###Output
Sequential(
(0): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1), padding=(1, 1))
(1): Tanh()
(2): AvgPool2d(kernel_size=2, stride=2, padding=0)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1), padding=(1, 1))
(4): Tanh()
(5): AvgPool2d(kernel_size=2, stride=2, padding=0)
(6): Conv2d(16, 120, kernel_size=(5, 5), stride=(1, 1), padding=(1, 1))
(7): Tanh()
(8): Flattener()
(9): Linear(in_features=1920, out_features=120, bias=True)
(10): Tanh()
(11): Linear(in_features=120, out_features=84, bias=True)
(12): Tanh()
(13): Linear(in_features=84, out_features=10, bias=True)
)
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set yourself.

As a result, you should end up with a trained model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
Final test accuracy - 0.9024661954517517
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch

We will be doing this exercise in Google Colab - https://colab.research.google.com/

Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.

A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import random
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.

Just in case, for more details - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
predictions = model(x_gpu)
indices = torch.argmax(predictions, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.707580, Train accuracy: 0.784561, Val accuracy: 0.815234
Average loss: 0.608457, Train accuracy: 0.818432, Val accuracy: 0.823015
Average loss: 0.558876, Train accuracy: 0.833242, Val accuracy: 0.845335
Average loss: 0.522073, Train accuracy: 0.844402, Val accuracy: 0.845267
Average loss: 0.497825, Train accuracy: 0.851551, Val accuracy: 0.840079
###Markdown
Data augmentation

When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network.

It is important that the augmented data resemble what could occur in real life; otherwise the benefit of the augmentations shrinks and they may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the results of the augmentation (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.192248, Train accuracy: 0.612924, Val accuracy: 0.756945
Average loss: 0.946745, Train accuracy: 0.696669, Val accuracy: 0.728892
Average loss: 0.888855, Train accuracy: 0.716480, Val accuracy: 0.764862
Average loss: 0.847636, Train accuracy: 0.729908, Val accuracy: 0.760699
Average loss: 0.817090, Train accuracy: 0.739173, Val accuracy: 0.768070
###Markdown
LeNet

Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it handles SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
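# Shape bookkeeping for 32x32 inputs (the arithmetic behind Linear(120, 84)):
#   conv 5x5: 32 -> 28, max-pool 2 -> 14; conv 5x5: 14 -> 10, max-pool 2 -> 5;
#   conv 5x5: 5 -> 1, so the 120 channels flatten to a 120-dim vector.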
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(16, 120, 5),
nn.ReLU(),
Flattener(),
nn.Linear(120, 84),
nn.ReLU(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.884603, Train accuracy: 0.342064, Val accuracy: 0.646441
Average loss: 0.888245, Train accuracy: 0.723424, Val accuracy: 0.783291
Average loss: 0.707281, Train accuracy: 0.781695, Val accuracy: 0.813323
Average loss: 0.633572, Train accuracy: 0.805549, Val accuracy: 0.824858
Average loss: 0.591450, Train accuracy: 0.816828, Val accuracy: 0.826974
Average loss: 0.556816, Train accuracy: 0.828823, Val accuracy: 0.843560
Average loss: 0.537225, Train accuracy: 0.836280, Val accuracy: 0.845540
Average loss: 0.519241, Train accuracy: 0.841381, Val accuracy: 0.839328
Average loss: 0.503797, Train accuracy: 0.844999, Val accuracy: 0.848406
Average loss: 0.497613, Train accuracy: 0.849572, Val accuracy: 0.840147
###Markdown
Hyperparameter tuning
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, scheduler, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
scheduler.step()
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
regs = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
for t in range(10):
lr = random.choice(learning_rates)
anneal_epoch = random.choice(anneal_epochs)
reg = random.choice(regs)
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(16, 120, 5),
nn.ReLU(),
Flattener(),
nn.Linear(120, 84),
nn.ReLU(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg)
scheduler = lr_scheduler.StepLR(optimizer, step_size=anneal_epoch,
gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(lenet_model,
train_aug_loader,
val_loader,
loss,
optimizer,
scheduler,
10)
params = Hyperparams(lr, anneal_epoch, reg)
result = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[params] = result
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.66, best hyperparams: Hyperparams(learning_rate=0.01, anneal_epochs=5, reg=0.0001)
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that beat our baselines.

Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations as well
###Code
best_model = best_run.model  # reuse the best model found during hyperparameter search
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set yourself.

As a result, you should end up with a trained model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
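# A minimal sketch (an assumption, not the graded solution), reusing the
# compute_accuracy helper and data_test defined earlier in this notebook
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)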
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch

We will be doing this exercise in Google Colab - https://colab.research.google.com/

Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.

A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.

Just in case, for more details - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if scheduler:
scheduler.step()
current_lr = optimizer.param_groups[0]['lr']
print(f'Epoch: {epoch}, Learning rate: {current_lr}')
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
model.eval() # Evaluation mode
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
accuracy = float(correct_samples) / total_samples
return accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.372701, Train accuracy: 0.540883, Val accuracy: 0.724797
Average loss: 0.684038, Train accuracy: 0.792564, Val accuracy: 0.809569
Average loss: 0.583857, Train accuracy: 0.824097, Val accuracy: 0.814620
Average loss: 0.540260, Train accuracy: 0.837866, Val accuracy: 0.845130
Average loss: 0.507111, Train accuracy: 0.849657, Val accuracy: 0.839465
###Markdown
Data augmentation

When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network.

It is important that the augmented data resemble what could occur in real life; otherwise the benefit of the augmentations shrinks and they may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1249: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the results of the augmentation (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1249: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.611328, Train accuracy: 0.812545, Val accuracy: 0.828612
Average loss: 0.570780, Train accuracy: 0.826008, Val accuracy: 0.841786
Average loss: 0.548460, Train accuracy: 0.833123, Val accuracy: 0.841581
Average loss: 0.536414, Train accuracy: 0.838515, Val accuracy: 0.825609
Average loss: 0.523606, Train accuracy: 0.839692, Val accuracy: 0.835370
###Markdown
LeNet

Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it handles SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5, stride=1),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(in_features=120, out_features=84),
nn.ReLU(inplace=True),
nn.Linear(in_features=84, out_features=10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.374075, Train accuracy: 0.533119, Val accuracy: 0.761245
Average loss: 0.551752, Train accuracy: 0.836894, Val accuracy: 0.869633
Average loss: 0.456045, Train accuracy: 0.863290, Val accuracy: 0.883557
Average loss: 0.406059, Train accuracy: 0.878323, Val accuracy: 0.883899
Average loss: 0.372796, Train accuracy: 0.888083, Val accuracy: 0.885127
Average loss: 0.350413, Train accuracy: 0.894891, Val accuracy: 0.892772
Average loss: 0.328256, Train accuracy: 0.901870, Val accuracy: 0.898846
Average loss: 0.319533, Train accuracy: 0.903935, Val accuracy: 0.896321
Average loss: 0.304025, Train accuracy: 0.907893, Val accuracy: 0.902805
Average loss: 0.292788, Train accuracy: 0.911733, Val accuracy: 0.899939
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5, stride=1),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(in_features=120, out_features=84),
nn.ReLU(inplace=True),
nn.Linear(in_features=84, out_features=10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
# anneal_epochs = [1, 5, 10, 15, 20, 50]
anneal_epochs = [5]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
for lr in learning_rates:
for ae in anneal_epochs:
for r in reg:
print(f"\nlearning rate: {lr}, anneal epoch: {ae}, weight decay: {r}")
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=r)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=ae, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
run_record[Hyperparams(lr, ae, r)] = RunResult(lenet_model, train_history, val_history, val_history[-1])
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that beat our baselines.

Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations as well
###Code
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.RandomGrayscale(p=0.1),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
best_model = nn.Sequential(
nn.Conv2d(3, 200, kernel_size=(3,3), stride=1),
nn.BatchNorm2d(200),
nn.ReLU(),
nn.Conv2d(200, 200, kernel_size=(3,3), stride=1),
nn.BatchNorm2d(200),
nn.ReLU(),
nn.MaxPool2d(3),
nn.Conv2d(200, 400, kernel_size=(3,3), stride=1),
nn.BatchNorm2d(400),
nn.ReLU(),
nn.Conv2d(400, 400, kernel_size=(3,3), stride=1),
nn.BatchNorm2d(400),
nn.ReLU(),
nn.MaxPool2d(3),
nn.Flatten(),
nn.Linear(400, 100),
nn.Dropout(0.5),
nn.Linear(100, 10),
)
best_model = best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = torch.optim.Adam(best_model.parameters(), lr=0.0001, weight_decay=0.00001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)
batch_size = 64
epoch_num = 10
# Let's train it!
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, epoch_num)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1249: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set yourself.

As a result, you should end up with a trained model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
best_model.eval()
correct_samples = 0
total_samples = 0
for x, y in test_loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
predictions = best_model.forward(x_gpu)
y_pred = predictions.max(1)[1].data
correct_samples += torch.sum(y_pred==y_gpu)
total_samples += y_gpu.shape[0]
final_test_accuracy = float(correct_samples) / total_samples
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch

We will be doing this exercise in Google Colab - https://colab.research.google.com/

Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.

A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.

Just in case, for more details - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
    """
    Computes accuracy on the dataset wrapped in a loader
    Returns: accuracy as a float value between 0 and 1
    """
    model.eval() # Evaluation mode
    # Count correct predictions over the whole dataset instead of averaging
    # per-batch accuracies, so that a smaller final batch is weighted correctly
    correct_samples = 0
    total_samples = 0
    for x, y in loader:
        x_gpu = x.to(device)
        y_gpu = y.to(device)
        _, indices = torch.max(model(x_gpu), 1)
        correct_samples += torch.sum(indices == y_gpu)
        total_samples += y.shape[0]
    return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.421948, Train accuracy: 0.523803, Val accuracy: 0.753853
Average loss: 0.695322, Train accuracy: 0.788964, Val accuracy: 0.781038
Average loss: 0.587879, Train accuracy: 0.823329, Val accuracy: 0.826589
Average loss: 0.536535, Train accuracy: 0.839351, Val accuracy: 0.818105
Average loss: 0.507209, Train accuracy: 0.847592, Val accuracy: 0.846780
###Markdown
Data augmentation

When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network.

It is important that the augmented data resemble what could occur in real life; otherwise the benefit of the augmentations shrinks and they may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the results of the augmentation (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
# transforms.RandomHorizontalFlip(),
# transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.596135, Train accuracy: 0.818227, Val accuracy: 0.810918
Average loss: 0.548689, Train accuracy: 0.832099, Val accuracy: 0.837762
Average loss: 0.526932, Train accuracy: 0.841347, Val accuracy: 0.850811
Average loss: 0.509263, Train accuracy: 0.845630, Val accuracy: 0.838756
Average loss: 0.500409, Train accuracy: 0.849009, Val accuracy: 0.843749
###Markdown
LeNet

Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it handles SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try to read the key parts and implement the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
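# Shape bookkeeping for 32x32 inputs (the arithmetic behind Linear(400, 120)):
#   conv 5x5: 32 -> 28, max-pool 2 -> 14; conv 5x5: 14 -> 10, max-pool 2 -> 5;
#   flattening 16 channels * 5 * 5 gives the 400 input features.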
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(400, 120),
nn.Sigmoid(),
nn.Linear(120, 84),
nn.Sigmoid(),
nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
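# A minimal random-search sketch (an assumption, not the author's solution).
# This notebook's train_model has no scheduler argument, so we train one epoch
# at a time and step a StepLR scheduler manually; lenet_model is reused between
# runs for brevity, as the search loops earlier in this document also do.
import random  # standard-library import added for sampling configurations

for _attempt in range(5):
    hp = Hyperparams(random.choice(learning_rates),
                     random.choice(anneal_epochs),
                     random.choice(reg))
    optimizer = optim.SGD(lenet_model.parameters(), lr=hp.learning_rate,
                          weight_decay=hp.reg)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=hp.anneal_epochs,
                                          gamma=anneal_coeff)
    train_hist, val_hist = [], []
    for _epoch in range(epoch_num):
        _, th, vh = train_model(lenet_model, train_aug_loader, val_loader,
                                loss, optimizer, 1)
        train_hist += th
        val_hist += vh
        scheduler.step()  # anneal the learning rate every hp.anneal_epochs epochs
    run_record[hp] = RunResult(lenet_model, train_hist, val_hist, val_hist[-1])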
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that beat our baselines.

Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Changing the number of training epochs
- Trying other augmentations as well
###Code
from torchvision.models import resnet34
model = resnet34(pretrained=True)
model.to(device)
# Let's train it!
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_history, train_history, val_history = train_model(model, train_aug_loader, val_loader, loss, optimizer, 10)
model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(64),
nn.MaxPool2d(2, 2),
nn.Conv2d(64, 128, 3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(128, 128, 3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(128),
nn.MaxPool2d(2, 2),
nn.Conv2d(128, 256, 3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, 3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256),
nn.MaxPool2d(2, 2),
Flattener(),
nn.Linear(4 * 4 * 256, 10)
)
model.type(torch.cuda.FloatTensor)
model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(model.parameters())
loss_history, train_history, val_history = train_model(model, train_aug_loader, val_loader, loss, optimizer, 10)
best_model = model  # keep the BatchNorm network trained above as the best model
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set yourself.

As a result, you should end up with a trained model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# Compute accuracy on the test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch

We will be doing this exercise in Google Colab - https://colab.research.google.com/

Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.

A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook will install PyTorch itself)
###Code
# # Install PyTorch and download data
# !pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
import itertools
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.

Just in case, for more details - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Implement the inference of the model on all of the batches from loader,
# and compute the overall accuracy.
# Hint: PyTorch has the argmax function!
correct_samples = 0
total_samples = 0
for x, y in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
accuracy = float(correct_samples) / total_samples
return accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.416580, Train accuracy: 0.525424, Val accuracy: 0.734489
Average loss: 0.700145, Train accuracy: 0.786558, Val accuracy: 0.794690
Average loss: 0.602223, Train accuracy: 0.819438, Val accuracy: 0.829500
Average loss: 0.551862, Train accuracy: 0.833515, Val accuracy: 0.842263
Average loss: 0.519409, Train accuracy: 0.845647, Val accuracy: 0.827998
###Markdown
Data augmentation

When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network.

It is important that the augmented data resemble what could occur in real life; otherwise the benefit of the augmentations shrinks and they may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
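###Markdown
For reference, a small illustrative sketch of what `transforms.Normalize` does: each channel is transformed as (x - mean) / std, so with the statistics above a pixel value of 0.43 maps to 0.0 in the first channel, -0.05 in the second, and -0.2 in the third.
###Code
# Minimal sketch: apply the same per-channel normalization to a single fake pixel.
norm = transforms.Normalize(mean=[0.43, 0.44, 0.47], std=[0.20, 0.20, 0.20])
px = torch.full((3, 1, 1), 0.43)  # one pixel, value 0.43 in every channel
print(norm(px).view(-1))  # tensor([ 0.0000, -0.0500, -0.2000])
###Output
_____no_output_____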
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all of the augmentations equally useful on this dataset? Could some of them confuse the model? Keep only the appropriate ones.

**Answer**

Not all of the augmentations turn out to be useful. For example, the horizontal and vertical flips seem to only confuse our network. Indeed, a vertical flip combined with a horizontal flip can turn the digit `9` into a `6`, and a horizontal flip of digits creates digits that do not exist. If our goal is not to predict mirrored digits, these two augmentations should be removed, which I do below.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.612859, Train accuracy: 0.814592, Val accuracy: 0.804450
Average loss: 0.559228, Train accuracy: 0.830137, Val accuracy: 0.832571
Average loss: 0.545853, Train accuracy: 0.834403, Val accuracy: 0.849840
Average loss: 0.528114, Train accuracy: 0.838839, Val accuracy: 0.858098
Average loss: 0.514099, Train accuracy: 0.843992, Val accuracy: 0.857621
###Markdown
LeNet

Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its time it achieved impressive results on MNIST; let's see how it copes with SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are absent from PyTorch: just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.Tanh(),
nn.MaxPool2d(2,2),
nn.Conv2d(6, 16, 5),
nn.Tanh(),
nn.MaxPool2d(2,2),
nn.Conv2d(16, 120, 5),
nn.Tanh(),
Flattener(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.263467, Train accuracy: 0.572945, Val accuracy: 0.814620
Average loss: 0.578812, Train accuracy: 0.824779, Val accuracy: 0.862398
Average loss: 0.491696, Train accuracy: 0.849964, Val accuracy: 0.867518
Average loss: 0.445850, Train accuracy: 0.863086, Val accuracy: 0.877688
Average loss: 0.413883, Train accuracy: 0.873170, Val accuracy: 0.876868
Average loss: 0.393378, Train accuracy: 0.878903, Val accuracy: 0.880827
Average loss: 0.378818, Train accuracy: 0.882640, Val accuracy: 0.886970
Average loss: 0.362343, Train accuracy: 0.888390, Val accuracy: 0.886492
Average loss: 0.345136, Train accuracy: 0.894584, Val accuracy: 0.890588
Average loss: 0.335442, Train accuracy: 0.895983, Val accuracy: 0.884240
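###Markdown
To double-check the `nn.Linear(120, 84)` input size, here is a minimal sanity-check sketch (not part of the assignment) that traces a dummy 32x32 input through a CPU copy of the convolutional stack above: conv 5x5 takes 32 -> 28, pooling halves it to 14, the next conv gives 10, pooling gives 5, and the final conv 5x5 gives 1x1 with 120 channels, i.e. 120 features after flattening.
###Code
# Sketch: verify that a 32x32 image is reduced to a 120-dimensional feature vector.
feature_stack = nn.Sequential(
    nn.Conv2d(3, 6, 5), nn.Tanh(), nn.MaxPool2d(2, 2),
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.MaxPool2d(2, 2),
    nn.Conv2d(16, 120, 5), nn.Tanh(),
)
with torch.no_grad():
    dummy = torch.zeros(1, 3, 32, 32)  # one fake SVHN-sized image
    print(feature_stack(dummy).shape)  # expected: torch.Size([1, 120, 1, 1])
###Output
_____no_output_____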
###Markdown
Hyperparameter search
###Code
def train_model_with_sched(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
current_lr = optimizer.param_groups[0]['lr']
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()  # accumulate as a plain float, not a graph-holding tensor
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the number of batches is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if scheduler:
scheduler.step()
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_coeff', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2, 1e-3]
anneal_coeff = [0.2, 0.9]
anneal_epochs = [5]
regs = [1e-4, 1e-5]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
# TODO: Your code here!
for lr, ac, ae, reg in itertools.product(learning_rates, anneal_coeff, anneal_epochs, regs):
print(f'Current set of parameters is: learning rate = {lr}, anneal_coeff = {ac}, anneal_epoch = {ae}, reg = {reg}')
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.Tanh(),
nn.MaxPool2d(2,2),
nn.Conv2d(6, 16, 5),
nn.Tanh(),
nn.MaxPool2d(2,2),
nn.Conv2d(16, 120, 5),
nn.Tanh(),
Flattener(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
# optimizer = torch.optim.Adam(lenet_model.parameters(), lr=lr, weight_decay=reg)
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=ae, gamma=ac)
loss_history, train_history, val_history = train_model_with_sched(lenet_model,
train_aug_loader,
val_loader,
loss,
optimizer,
epoch_num,
scheduler)
running_res = RunResult(lenet_model, train_history, val_history, val_history[-1])
params = Hyperparams(lr, ac, ae, reg)
run_record[params] = running_res
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.90, best hyperparams: Hyperparams(learning_rate=0.1, anneal_coeff=0.2, anneal_epochs=5, reg=0.0001)
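###Markdown
As a quick illustration of what the winning `anneal_coeff=0.2` and `anneal_epochs=5` mean in practice, here is a toy sketch (not part of the search): `StepLR` multiplies the learning rate by `gamma` every `step_size` epochs, so the best run trains epochs 0-4 at lr 0.1 and epochs 5-9 at lr 0.02.
###Code
# Toy sketch of the StepLR schedule used by the best run.
toy_optimizer = optim.SGD([nn.Parameter(torch.zeros(1))], lr=0.1)
toy_scheduler = optim.lr_scheduler.StepLR(toy_optimizer, step_size=5, gamma=0.2)
for epoch in range(10):
    print(epoch, toy_optimizer.param_groups[0]['lr'])
    toy_optimizer.step()
    toy_scheduler.step()
###Output
_____no_output_____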
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that perform better than our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations

For this free exercise I took an architecture that structurally resembles VGG16. `Adam` proved to work well as the optimizer. On top of the existing list of augmentations I added `RandomGrayscale`: a random conversion of the image to grayscale. I also increased the number of training epochs to 20, although even at 10 the quality was already higher than with LeNet.
###Code
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.RandomGrayscale(p=0.1),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
best_model = nn.Sequential()
best_model.add_module('conv1', nn.Conv2d(3, 200, kernel_size=(3,3), stride=1))
best_model.add_module('bn1_1', nn.BatchNorm2d(200))
best_model.add_module('relu1_1', nn.ReLU())
best_model.add_module('conv1_2', nn.Conv2d(200, 200, kernel_size=(3,3), stride=1))
best_model.add_module('bn1_2', nn.BatchNorm2d(200))
best_model.add_module('relu1_2', nn.ReLU())
best_model.add_module('maxpool1', nn.MaxPool2d(3))
best_model.add_module('conv2_1', nn.Conv2d(200, 400, kernel_size=(3,3), stride=1))
best_model.add_module('bn2_1', nn.BatchNorm2d(400))
best_model.add_module('relu2_1', nn.ReLU())
best_model.add_module('conv2_2', nn.Conv2d(400, 400, kernel_size=(3,3), stride=1))
best_model.add_module('bn2_2', nn.BatchNorm2d(400))
best_model.add_module('relu2_2', nn.ReLU())
best_model.add_module('maxpool2', nn.MaxPool2d(3))
best_model.add_module('flatten', nn.Flatten())
best_model.add_module('fc1', nn.Linear(400, 100))
best_model.add_module('dp1', nn.Dropout(0.5))
best_model.add_module('fc2', nn.Linear(100, 10))
best_model = best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
#L2 regularization is added through weight_decay
optimizer = torch.optim.Adam(best_model.parameters(), lr=0.0001, weight_decay=0.00001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)
batch_size = 64
epoch_num = 20
# Let's train it!
loss_history, train_history, val_history = train_model_with_sched(
best_model,
train_aug_loader,
val_loader,
loss,
optimizer,
epoch_num,
scheduler = scheduler)
###Output
Average loss: 0.846205, Train accuracy: 0.737126, Val accuracy: 0.880896
Average loss: 0.440950, Train accuracy: 0.868426, Val accuracy: 0.897754
Average loss: 0.373071, Train accuracy: 0.890540, Val accuracy: 0.903556
Average loss: 0.333654, Train accuracy: 0.903013, Val accuracy: 0.915432
Average loss: 0.298466, Train accuracy: 0.912432, Val accuracy: 0.914955
Average loss: 0.278567, Train accuracy: 0.917978, Val accuracy: 0.920210
Average loss: 0.257124, Train accuracy: 0.924137, Val accuracy: 0.921166
Average loss: 0.236620, Train accuracy: 0.930280, Val accuracy: 0.922121
Average loss: 0.220128, Train accuracy: 0.935280, Val accuracy: 0.924101
Average loss: 0.206402, Train accuracy: 0.937839, Val accuracy: 0.925466
Average loss: 0.189389, Train accuracy: 0.943436, Val accuracy: 0.926899
Average loss: 0.173924, Train accuracy: 0.947872, Val accuracy: 0.930107
Average loss: 0.161249, Train accuracy: 0.951933, Val accuracy: 0.927513
Average loss: 0.147546, Train accuracy: 0.955005, Val accuracy: 0.930244
Average loss: 0.136066, Train accuracy: 0.959168, Val accuracy: 0.929902
Average loss: 0.094127, Train accuracy: 0.973876, Val accuracy: 0.938434
Average loss: 0.081239, Train accuracy: 0.978176, Val accuracy: 0.937752
Average loss: 0.075443, Train accuracy: 0.980429, Val accuracy: 0.938434
Average loss: 0.071123, Train accuracy: 0.981742, Val accuracy: 0.938912
Average loss: 0.066747, Train accuracy: 0.983073, Val accuracy: 0.938229
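###Markdown
As a small capacity check (a utility sketch, not part of the assignment), we can count the trainable parameters of the `best_model` defined above.
###Code
# Count trainable parameters as a rough measure of model capacity.
n_params = sum(p.numel() for p in best_model.parameters() if p.requires_grad)
print(f"best_model has {n_params:,} trainable parameters")
###Output
_____no_output_____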
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set.

As a result, you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
best_model.eval()
correct_samples = 0
total_samples = 0
with torch.no_grad():  # inference only, gradients are not needed
    for x_batch, y_batch in test_loader:
        x_batch = x_batch.to(device)
        y_batch = y_batch.to(device)
        predictions = best_model(x_batch)  # call the module directly instead of .forward()
        y_pred = predictions.max(1)[1]
        correct_samples += torch.sum(y_pred == y_batch)
        total_samples += y_batch.shape[0]
final_test_accuracy = float(correct_samples) / total_samples
print("Final test accuracy - ", final_test_accuracy)
###Output
Final test accuracy - 0.9335049170251998
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch

We will do this exercise in Google Colab - https://colab.research.google.com/

Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the celebration never ends.

A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Split the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()  # accumulate as a plain float, not a graph-holding tensor
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the number of batches is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation

When working with images, one of the especially important techniques is data augmentation: generating additional training data from the original samples. This way we can effectively "enlarge" the training set, which leads to a better-performing network.

It is important that the augmented data resemble what can be encountered in real life; otherwise the benefit of augmentation shrinks and may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with probability 50%
- RandomVerticalFlip - vertical flip with probability 50%
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all of the augmentations equally useful on this dataset? Could some of them confuse the model? Keep only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = None
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
LeNet

Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its time it achieved impressive results on MNIST; let's see how it copes with SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are absent from PyTorch: just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
###Markdown
Hyperparameter search
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that perform better than our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations
###Code
best_model = None
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set.

As a result, you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch

We will do this exercise in Google Colab - https://colab.research.google.com/

Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the celebration never ends.

A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras; our notebook will install PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]
)
)
data_test = dset.SVHN('./',
split='test',
transform=transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]
)
)
###Output
_____no_output_____
###Markdown
Split the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train,
batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train,
batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
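###Markdown
A tiny usage sketch (illustration only): `Flattener` reshapes a batch of feature maps of shape `(N, C, H, W)` into flat vectors of shape `(N, C*H*W)`, which is what `nn.Linear` expects.
###Code
# Example: flatten a fake batch of 8 feature maps of shape 64x2x2.
flat = Flattener()(torch.zeros(8, 64, 2, 2))
print(flat.shape)  # torch.Size([8, 256]) -> matches nn.Linear(64*2*2, 10) below
###Output
_____no_output_____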
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()  # accumulate as a plain float, not a graph-holding tensor
if scheduler is not None:
scheduler.step()
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the number of batches is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
    with torch.no_grad():  # gradients are not needed for evaluation
        for x, y in loader:
            x_gpu = x.to(device)
            y_gpu = y.to(device)
            pred = model(x_gpu)
            _, indices = torch.max(pred, 1)
            correct_samples += torch.sum(indices == y_gpu)
            total_samples += y_gpu.shape[0]
val_accuracy = float(correct_samples) / total_samples
return val_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.367969, Train accuracy: 0.544842, Val accuracy: 0.739949
Average loss: 0.698685, Train accuracy: 0.786370, Val accuracy: 0.801652
Average loss: 0.592008, Train accuracy: 0.822527, Val accuracy: 0.824995
Average loss: 0.552195, Train accuracy: 0.834147, Val accuracy: 0.813869
Average loss: 0.516446, Train accuracy: 0.847251, Val accuracy: 0.787250
###Markdown
Data augmentation

When working with images, one of the especially important techniques is data augmentation: generating additional training data from the original samples. This way we can effectively "enlarge" the training set, which leads to a better-performing network.

It is important that the augmented data resemble what can be encountered in real life; otherwise the benefit of augmentation shrinks and may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with probability 50%
- RandomVerticalFlip - vertical flip with probability 50%
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train,
batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1201: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./', transform=tfs)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1201: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all of the augmentations equally useful on this dataset? Could some of them confuse the model? Keep only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(5, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./', transform=tfs)
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = torch.utils.data.DataLoader(data_aug_train,
batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.551985, Train accuracy: 0.832236, Val accuracy: 0.854549
Average loss: 0.515229, Train accuracy: 0.843856, Val accuracy: 0.834141
Average loss: 0.499885, Train accuracy: 0.847968, Val accuracy: 0.837759
Average loss: 0.484723, Train accuracy: 0.851295, Val accuracy: 0.852638
Average loss: 0.473744, Train accuracy: 0.858308, Val accuracy: 0.848133
###Markdown
LeNet

Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its time it achieved impressive results on MNIST; let's see how it copes with SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are absent from PyTorch: just take their sizes and map them onto the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
Flattener(),
nn.Linear(in_features=16 * 5 * 5, out_features=120),
nn.Tanh(),
nn.Linear(in_features=120, out_features=84),
nn.Tanh(),
nn.Linear(in_features=84, out_features=10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.215577, Train accuracy: 0.595195, Val accuracy: 0.823220
Average loss: 0.536054, Train accuracy: 0.836143, Val accuracy: 0.854549
Average loss: 0.453425, Train accuracy: 0.861806, Val accuracy: 0.869770
Average loss: 0.404877, Train accuracy: 0.875729, Val accuracy: 0.869906
Average loss: 0.380511, Train accuracy: 0.884227, Val accuracy: 0.882534
Average loss: 0.356347, Train accuracy: 0.890762, Val accuracy: 0.873661
Average loss: 0.339846, Train accuracy: 0.895744, Val accuracy: 0.878984
Average loss: 0.322677, Train accuracy: 0.901034, Val accuracy: 0.891065
Average loss: 0.312159, Train accuracy: 0.903611, Val accuracy: 0.893864
Average loss: 0.298735, Train accuracy: 0.908559, Val accuracy: 0.884581
###Markdown
Hyperparameter search
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'reg', 'optimizer'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
np.random.seed(42)
learning_rates = [1e-1, 1e-2, 10 ** -2.5]
anneal_coeff = 0.2
anneal_epoch = 2
regs = [1e-4]
optimizers = [optim.SGD, optim.Adam]
epoch_num = 10
train_aug_loader = torch.utils.data.DataLoader(data_aug_train,
batch_size=16,
sampler=train_sampler)
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
from itertools import product
best_hyperparams = Hyperparams(None, None, None)
best_result = RunResult(None, None, None, None)
for lr, reg, optimizer in product(learning_rates, regs, optimizers):
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=(5, 5)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.Tanh(),
Flattener(),
nn.Linear(in_features=16 * 5 * 5, out_features=120),
nn.Tanh(),
nn.Linear(in_features=120, out_features=84),
nn.Tanh(),
nn.Linear(in_features=84, out_features=10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optimizer(lenet_model.parameters(), lr=lr, weight_decay=reg)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)
params = Hyperparams(lr, reg, optimizer)
print(f"\nCurrent hyperparams: {params}")
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
result = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[params] = result
if best_result.final_val_accuracy is None or best_result.final_val_accuracy < result.final_val_accuracy:
best_result = result
best_hyperparams = params
print("\nCurrent best validation accuracy: %4.2f, best hyperparams: %s" % (best_result.final_val_accuracy, best_hyperparams))
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.89, best hyperparams: Hyperparams(learning_rate=0.1, reg=0.0001, optimizer=SGD (
Parameter Group 0
dampening: 0
initial_lr: 0.1
lr: 3.200000000000001e-05
momentum: 0
nesterov: False
weight_decay: 0.0001
))
###Markdown
Free exercise - let's catch up with and overtake LeNet!

Try to find an architecture and training settings that perform better than our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations
###Code
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-3]
regs = [1e-4, 1e-5]
epoch_num = 10
train_aug_loader = torch.utils.data.DataLoader(data_aug_train,
batch_size=16,
sampler=train_sampler)
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionnary
# Important: perform search in logarithmic space!
# TODO: Your code here!
from itertools import product
best_hyperparams = Hyperparams(None, None)
best_result = RunResult(None, None, None, None)
for lr, reg in product(learning_rates, regs):
lenet_model = nn.Sequential(
nn.Conv2d(3, 256, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 1024, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(1024),
nn.ReLU(inplace=True),
nn.Conv2d(1024, 1024, kernel_size=(3, 3)),
nn.MaxPool2d(kernel_size=(2, 2), stride=2),
nn.BatchNorm2d(1024),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(1024 * 2 * 2, 512),
nn.BatchNorm1d(512),
nn.ReLU(inplace=True),
nn.Linear(512, 64),
nn.BatchNorm1d(64),
nn.ReLU(inplace=True),
nn.Linear(64, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(lenet_model.parameters(), lr=lr, weight_decay=reg)
scheduler = optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=1e-3, cycle_momentum=False)
params = Hyperparams(lr, reg)
print(f"\nCurrent hyperparams: {params}")
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
result = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[params] = result
if best_result.final_val_accuracy is None or best_result.final_val_accuracy < result.final_val_accuracy:
best_result = result
best_hyperparams = params
print("\nCurrent best validation accuracy: %4.2f, best hyperparams: %s" % (best_result.final_val_accuracy, best_hyperparams))
###Output
Current hyperparams: Hyperparams(learning_rate=0.1, reg=0.0001)
Average loss: 0.774204, Train accuracy: 0.783589, Val accuracy: 0.889154
Average loss: 0.401270, Train accuracy: 0.882401, Val accuracy: 0.913794
Average loss: 0.313353, Train accuracy: 0.907126, Val accuracy: 0.920620
Average loss: 0.263346, Train accuracy: 0.922994, Val accuracy: 0.924442
Average loss: 0.221191, Train accuracy: 0.934904, Val accuracy: 0.929083
Average loss: 0.192814, Train accuracy: 0.943078, Val accuracy: 0.924852
Average loss: 0.168318, Train accuracy: 0.949971, Val accuracy: 0.926831
Average loss: 0.149894, Train accuracy: 0.953810, Val accuracy: 0.928333
Average loss: 0.130594, Train accuracy: 0.960072, Val accuracy: 0.931267
Average loss: 0.121610, Train accuracy: 0.962325, Val accuracy: 0.927172
Current best validation accuracy: 0.93, best hyperparams: Hyperparams(learning_rate=0.1, reg=0.0001)
Current hyperparams: Hyperparams(learning_rate=0.1, reg=1e-05)
Average loss: 0.783472, Train accuracy: 0.781097, Val accuracy: 0.897072
Average loss: 0.398077, Train accuracy: 0.882572, Val accuracy: 0.912839
Average loss: 0.311411, Train accuracy: 0.909173, Val accuracy: 0.922394
Average loss: 0.253370, Train accuracy: 0.926816, Val accuracy: 0.925534
Average loss: 0.209133, Train accuracy: 0.937754, Val accuracy: 0.926353
Average loss: 0.175415, Train accuracy: 0.948009, Val accuracy: 0.932428
Average loss: 0.149012, Train accuracy: 0.955909, Val accuracy: 0.927923
Average loss: 0.129001, Train accuracy: 0.960004, Val accuracy: 0.932906
Average loss: 0.110749, Train accuracy: 0.966266, Val accuracy: 0.928606
Average loss: 0.097218, Train accuracy: 0.969730, Val accuracy: 0.929356
Current best validation accuracy: 0.93, best hyperparams: Hyperparams(learning_rate=0.1, reg=1e-05)
Current hyperparams: Hyperparams(learning_rate=0.001, reg=0.0001)
Average loss: 0.783746, Train accuracy: 0.780654, Val accuracy: 0.901099
Average loss: 0.399287, Train accuracy: 0.883510, Val accuracy: 0.911337
Average loss: 0.316456, Train accuracy: 0.907194, Val accuracy: 0.920756
Average loss: 0.261813, Train accuracy: 0.921885, Val accuracy: 0.923486
Average loss: 0.226908, Train accuracy: 0.932277, Val accuracy: 0.925944
Average loss: 0.190885, Train accuracy: 0.943248, Val accuracy: 0.923896
Average loss: 0.167073, Train accuracy: 0.949647, Val accuracy: 0.930517
Average loss: 0.150864, Train accuracy: 0.954169, Val accuracy: 0.928606
Average loss: 0.131838, Train accuracy: 0.959526, Val accuracy: 0.928469
Average loss: 0.120541, Train accuracy: 0.962478, Val accuracy: 0.925193
Current best validation accuracy: 0.93, best hyperparams: Hyperparams(learning_rate=0.1, reg=1e-05)
Current hyperparams: Hyperparams(learning_rate=0.001, reg=1e-05)
Average loss: 0.772762, Train accuracy: 0.783333, Val accuracy: 0.889427
Average loss: 0.398886, Train accuracy: 0.883851, Val accuracy: 0.912702
Average loss: 0.309198, Train accuracy: 0.908678, Val accuracy: 0.916047
Average loss: 0.255701, Train accuracy: 0.925810, Val accuracy: 0.925807
Average loss: 0.210129, Train accuracy: 0.938232, Val accuracy: 0.928674
Average loss: 0.176550, Train accuracy: 0.947872, Val accuracy: 0.926217
Average loss: 0.146052, Train accuracy: 0.956489, Val accuracy: 0.928060
Average loss: 0.128359, Train accuracy: 0.960840, Val accuracy: 0.929356
Average loss: 0.105671, Train accuracy: 0.967717, Val accuracy: 0.928196
Average loss: 0.093911, Train accuracy: 0.970361, Val accuracy: 0.929629
Current best validation accuracy: 0.93, best hyperparams: Hyperparams(learning_rate=0.001, reg=1e-05)
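###Markdown
The runs above pair `Adam` with `CyclicLR`. As a toy illustration of that schedule (a sketch only; `step_size_up=4` is shortened for the demo and is not taken from the runs above), the learning rate ramps linearly between `base_lr=1e-4` and `max_lr=1e-3` and back in the default triangular mode.
###Code
# Toy sketch of a CyclicLR schedule: lr ramps from base_lr up to max_lr and back.
toy_opt = optim.Adam([nn.Parameter(torch.zeros(1))], lr=1e-3)
toy_sched = optim.lr_scheduler.CyclicLR(toy_opt, base_lr=1e-4, max_lr=1e-3,
                                        step_size_up=4, cycle_momentum=False)
for step in range(9):
    print(step, round(toy_opt.param_groups[0]['lr'], 6))
    toy_opt.step()
    toy_sched.step()
###Output
_____no_output_____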
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set.

As a result, you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
best_model = best_result.model
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
#!pip3 install torch torchvision
#!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
print(f'CUDA available: {torch.cuda.is_available()}')
print(f'Device count: {torch.cuda.device_count()}')
torch.rand(100).cuda()
###Output
CUDA available: True
Device count: 1
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./data',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
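# The mean/std above are channel-wise normalization constants used throughout this
# assignment (approximately the per-channel statistics of the SVHN training set).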
data_test = dset.SVHN('./data', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
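# Note: PyTorch also ships nn.Flatten() (used later in this document), which flattens
# everything after the batch dimension just like this helper module.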
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
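# Shape check: SVHN images are 32x32, and the two MaxPool2d(4) layers reduce
# 32 -> 8 -> 2, so the classifier sees 64 channels of 2x2 maps, i.e. 64*2*2 features.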
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new version is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += float(loss_value)  # plain float, so the autograd graph is not retained
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if scheduler:
if isinstance(scheduler, optim.lr_scheduler.ReduceLROnPlateau):
scheduler.step(val_accuracy)
else:
scheduler.step()
print("%d) Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (epoch + 1, ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
with torch.no_grad():
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
1) Average loss: 1.439584, Train accuracy: 0.518496, Val accuracy: 0.748413
2) Average loss: 0.718074, Train accuracy: 0.780074, Val accuracy: 0.800218
3) Average loss: 0.609681, Train accuracy: 0.817493, Val accuracy: 0.807795
4) Average loss: 0.556759, Train accuracy: 0.833567, Val accuracy: 0.834619
5) Average loss: 0.524693, Train accuracy: 0.842849, Val accuracy: 0.837690
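###Markdown
 A minimal sketch (assuming `plt`, `loss_history`, `train_history` and `val_history` from the cells above) to visualize the recorded training curves:
###Code
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.title("Loss")
plt.plot(loss_history)
plt.subplot(1, 2, 2)
plt.title("Train/validation accuracy")
plt.plot(train_history, label="train")
plt.plot(val_history, label="val")
plt.legend()
###Output
_____no_output_____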
###Markdown
Data augmentation. When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This effectively lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of the augmentations shrinks and they may even hurt the network. PyTorch ships several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, interpolation=transforms.functional.InterpolationMode.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./data',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, interpolation=transforms.functional.InterpolationMode.BILINEAR),
])
data_aug_vis = dset.SVHN('./data',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the valid ones.
###Code
tfs = transforms.Compose([
# Good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(10, interpolation=transforms.functional.InterpolationMode.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create new instances of loaders with the augmentations
data_aug_train = dset.SVHN('./data', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
1) Average loss: 1.002085, Train accuracy: 0.671604, Val accuracy: 0.774418
2) Average loss: 0.843833, Train accuracy: 0.726171, Val accuracy: 0.772575
3) Average loss: 0.799028, Train accuracy: 0.744651, Val accuracy: 0.765750
4) Average loss: 0.768793, Train accuracy: 0.754513, Val accuracy: 0.766228
5) Average loss: 0.744665, Train accuracy: 0.762942, Val accuracy: 0.778855
###Markdown
LeNet. Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# LeNet-like architecture for SVHN task
def create_lenet_model(device):
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Flatten(),
nn.Linear(400, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10))
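    # Shape check: 32 -> 28 (conv 5x5) -> 14 (pool) -> 10 (conv 5x5) -> 5 (pool),
    # so the flattened feature size is 16 * 5 * 5 = 400, matching nn.Linear(400, 120) above.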
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
return lenet_model
lenet_model = create_lenet_model(device=device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
1) Average loss: 1.558958, Train accuracy: 0.463058, Val accuracy: 0.714832
2) Average loss: 0.766354, Train accuracy: 0.761168, Val accuracy: 0.807453
3) Average loss: 0.619916, Train accuracy: 0.809149, Val accuracy: 0.835779
4) Average loss: 0.557765, Train accuracy: 0.827697, Val accuracy: 0.826701
5) Average loss: 0.517985, Train accuracy: 0.841791, Val accuracy: 0.851887
6) Average loss: 0.487492, Train accuracy: 0.849964, Val accuracy: 0.862262
7) Average loss: 0.464337, Train accuracy: 0.856448, Val accuracy: 0.843492
8) Average loss: 0.451682, Train accuracy: 0.859895, Val accuracy: 0.866221
9) Average loss: 0.432538, Train accuracy: 0.866959, Val accuracy: 0.864856
10) Average loss: 0.419571, Train accuracy: 0.870525, Val accuracy: 0.870657
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are the learning rate, annealing rate and regularization
# We also encourage you to try different optimizers
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10]
regs = [1e-4]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
for lr in learning_rates:
for anneal_epoch in anneal_epochs:
for reg in regs:
lenet_model = create_lenet_model(device=device)
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg)
lr_lambda = lambda epoch: anneal_coeff if epoch % anneal_epoch == anneal_epoch - 1 else 1
scheduler = optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda=lr_lambda)
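            # MultiplicativeLR multiplies the current lr by lr_lambda(epoch) after each epoch,
            # so this lambda cuts the lr by anneal_coeff once every anneal_epoch epochs.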
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
run_record[Hyperparams(lr, anneal_epoch, reg)] = RunResult(lenet_model, train_history, val_history, val_history[-1])
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.89, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=10, reg=0.0001)
###Markdown
Free-form exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. Things you can and should try: - BatchNormalization (for convolution layers PyTorch calls it [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d)) - Changing the number of layers and their width - Varying the number of training epochs - Trying other augmentations
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 10, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.BatchNorm2d(10),
nn.Conv2d(10, 50, 3),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.BatchNorm2d(50),
nn.Conv2d(50, 100, 3),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.BatchNorm2d(100),
nn.Flatten(),
nn.Linear(4 * 100, 10))
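# Shape check: 32 -> 28 (conv 5x5) -> 14 (pool) -> 12 (conv 3x3) -> 6 (pool) -> 4 (conv 3x3) -> 2 (pool),
# so the classifier sees 100 * 2 * 2 = 400 features - the 4 * 100 in the Linear layer above.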
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
tfs = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomAffine(10, interpolation=transforms.functional.InterpolationMode.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
batch_size = 64
data_aug_train = dset.SVHN('./data', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
epoch_num = 50
lr = 1e-2
reg = 1e-4
optimizer = optim.Adam(best_model.parameters(), lr=lr, weight_decay=reg)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max')  # train_model passes val_accuracy to step(), so the metric should be maximized
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
###Output
1) Average loss: 0.819372, Train accuracy: 0.736904, Val accuracy: 0.828476
2) Average loss: 0.547804, Train accuracy: 0.830546, Val accuracy: 0.843970
3) Average loss: 0.508539, Train accuracy: 0.841501, Val accuracy: 0.857143
4) Average loss: 0.493298, Train accuracy: 0.847695, Val accuracy: 0.862125
5) Average loss: 0.482520, Train accuracy: 0.849742, Val accuracy: 0.872295
6) Average loss: 0.479901, Train accuracy: 0.852507, Val accuracy: 0.857211
7) Average loss: 0.474183, Train accuracy: 0.853445, Val accuracy: 0.857552
8) Average loss: 0.466732, Train accuracy: 0.854435, Val accuracy: 0.860692
9) Average loss: 0.471749, Train accuracy: 0.854332, Val accuracy: 0.864446
10) Average loss: 0.464318, Train accuracy: 0.856346, Val accuracy: 0.869702
11) Average loss: 0.462150, Train accuracy: 0.857045, Val accuracy: 0.865470
12) Average loss: 0.459237, Train accuracy: 0.857352, Val accuracy: 0.864719
13) Average loss: 0.369113, Train accuracy: 0.888851, Val accuracy: 0.895502
14) Average loss: 0.337184, Train accuracy: 0.899140, Val accuracy: 0.902123
15) Average loss: 0.321306, Train accuracy: 0.904600, Val accuracy: 0.904034
16) Average loss: 0.307868, Train accuracy: 0.907126, Val accuracy: 0.903761
17) Average loss: 0.301197, Train accuracy: 0.910743, Val accuracy: 0.903897
18) Average loss: 0.294591, Train accuracy: 0.912415, Val accuracy: 0.905126
19) Average loss: 0.290181, Train accuracy: 0.913763, Val accuracy: 0.902669
20) Average loss: 0.283624, Train accuracy: 0.915811, Val accuracy: 0.905945
21) Average loss: 0.280024, Train accuracy: 0.915964, Val accuracy: 0.904989
22) Average loss: 0.273349, Train accuracy: 0.917892, Val accuracy: 0.906696
23) Average loss: 0.266765, Train accuracy: 0.919820, Val accuracy: 0.904443
24) Average loss: 0.252173, Train accuracy: 0.925673, Val accuracy: 0.906832
25) Average loss: 0.243727, Train accuracy: 0.928352, Val accuracy: 0.907720
26) Average loss: 0.246011, Train accuracy: 0.928147, Val accuracy: 0.906696
27) Average loss: 0.241434, Train accuracy: 0.929035, Val accuracy: 0.907583
28) Average loss: 0.239351, Train accuracy: 0.929154, Val accuracy: 0.907720
29) Average loss: 0.237093, Train accuracy: 0.931202, Val accuracy: 0.908061
30) Average loss: 0.237579, Train accuracy: 0.929256, Val accuracy: 0.907174
31) Average loss: 0.232442, Train accuracy: 0.931389, Val accuracy: 0.907242
32) Average loss: 0.231118, Train accuracy: 0.931423, Val accuracy: 0.907447
33) Average loss: 0.234165, Train accuracy: 0.930997, Val accuracy: 0.908402
34) Average loss: 0.231044, Train accuracy: 0.932208, Val accuracy: 0.906901
35) Average loss: 0.227010, Train accuracy: 0.932362, Val accuracy: 0.907720
36) Average loss: 0.228768, Train accuracy: 0.933505, Val accuracy: 0.906764
37) Average loss: 0.227265, Train accuracy: 0.931833, Val accuracy: 0.907993
38) Average loss: 0.226537, Train accuracy: 0.932618, Val accuracy: 0.907583
39) Average loss: 0.226264, Train accuracy: 0.933079, Val accuracy: 0.907993
40) Average loss: 0.227932, Train accuracy: 0.932652, Val accuracy: 0.907856
41) Average loss: 0.229141, Train accuracy: 0.932259, Val accuracy: 0.907856
42) Average loss: 0.228031, Train accuracy: 0.933659, Val accuracy: 0.907310
43) Average loss: 0.225913, Train accuracy: 0.933795, Val accuracy: 0.907310
44) Average loss: 0.228344, Train accuracy: 0.932123, Val accuracy: 0.907924
45) Average loss: 0.225124, Train accuracy: 0.934478, Val accuracy: 0.907310
46) Average loss: 0.224651, Train accuracy: 0.934819, Val accuracy: 0.907924
47) Average loss: 0.226720, Train accuracy: 0.932874, Val accuracy: 0.907037
48) Average loss: 0.225505, Train accuracy: 0.933676, Val accuracy: 0.907651
49) Average loss: 0.227016, Train accuracy: 0.933317, Val accuracy: 0.907174
50) Average loss: 0.224265, Train accuracy: 0.934102, Val accuracy: 0.906832
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# Compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy: ", final_test_accuracy)
###Output
Final test accuracy: 0.910341118623233
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
from google.colab import drive
drive.mount('/content/drive')
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new version is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None, anneal_epochs=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += float(loss_value)  # plain float, so the autograd graph is not retained
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
if epoch > 5 and val_accuracy < 0.5:
break
if not scheduler:
continue
scheduler.step()
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
    # Move the data to device before running it through the model
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
indices = torch.argmax(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
accuracy = float(correct_samples) / total_samples
return accuracy
# loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation. When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This effectively lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of the augmentations shrinks and they may even hurt the network. PyTorch ships several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1231: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:1231: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the valid ones.
###Code
tfs = transforms.Compose([
    # Good augmentations: small color jitter and small rotations keep the digits readable
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, interpolation=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./',
transform=tfs
)
# Create new instances of loaders with the chosen augmentations
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.614537, Train accuracy: 0.811726, Val accuracy: 0.847178
Average loss: 0.565538, Train accuracy: 0.827953, Val accuracy: 0.803495
Average loss: 0.541198, Train accuracy: 0.836621, Val accuracy: 0.859873
Average loss: 0.525816, Train accuracy: 0.839743, Val accuracy: 0.854071
Average loss: 0.516308, Train accuracy: 0.844163, Val accuracy: 0.863081
###Markdown
LeNet. Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.AvgPool2d(2),
nn.Tanh(),
nn.Conv2d(6, 16, 5),
nn.AvgPool2d(2),
nn.Tanh(),
nn.Conv2d(16, 120, 5),
nn.Tanh(),
Flattener(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
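# Shape check: 32 -> 28 (conv 5x5) -> 14 (pool) -> 10 (conv 5x5) -> 5 (pool) -> 1 (conv 5x5),
# so Conv2d(16, 120, 5) leaves exactly 120 features per image after flattening.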
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 2.038343, Train accuracy: 0.290977, Val accuracy: 0.603167
Average loss: 0.818385, Train accuracy: 0.747261, Val accuracy: 0.801584
Average loss: 0.591051, Train accuracy: 0.819541, Val accuracy: 0.852229
Average loss: 0.521520, Train accuracy: 0.840119, Val accuracy: 0.860965
Average loss: 0.480949, Train accuracy: 0.854042, Val accuracy: 0.864173
Average loss: 0.451818, Train accuracy: 0.859912, Val accuracy: 0.860351
Average loss: 0.428962, Train accuracy: 0.868904, Val accuracy: 0.874070
Average loss: 0.413287, Train accuracy: 0.873631, Val accuracy: 0.869156
Average loss: 0.397106, Train accuracy: 0.877385, Val accuracy: 0.874070
Average loss: 0.380853, Train accuracy: 0.882640, Val accuracy: 0.871408
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are the learning rate, annealing rate and regularization
# We also encourage you to try different optimizers
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_coeff', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-2, 1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
annotation = """Learning rate {},
Learning rate annealing {},
Learning rate decay step size epoch {},
Regularization strength {}"""
hyperparameters_presets = []
for i in range(10):
learning_rate = 10**np.random.uniform(0, -4)
anneal_coeff = np.random.uniform(1e-0, 1e-2)
anneal_epoch = np.random.randint(epoch_num)
reg_strength = np.random.choice(reg)
hyperparams = Hyperparams(learning_rate, anneal_coeff, anneal_epoch, reg_strength)
title = annotation.format(learning_rate, anneal_coeff, anneal_epoch, reg_strength)
print(title)
# Model structure
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.AvgPool2d(2),
nn.Tanh(),
nn.Conv2d(6, 16, 5),
nn.AvgPool2d(2),
nn.Tanh(),
nn.Conv2d(16, 120, 5),
nn.Tanh(),
Flattener(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), learning_rate, weight_decay=reg_strength)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)
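    # StepLR decays the learning rate by a factor of gamma once every step_size epochs.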
loss_history, train_history, val_history = train_model(lenet_model,
train_aug_loader,
val_loader,
loss, optimizer, epoch_num,
scheduler, anneal_epoch)
final_val_accuracy = val_history[-1]
run_result = RunResult(lenet_model, train_history, val_history, final_val_accuracy)
run_record[hyperparams] = run_result
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.88, best hyperparams: Hyperparams(learning_rate=0.06565725441236954, anneal_coeff=0.6036131407447823, anneal_epochs=9, reg=0.001)
###Markdown
Free-form exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. Things you can and should try: - BatchNormalization (for convolution layers PyTorch calls it [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d)) - Changing the number of layers and their width - Varying the number of training epochs - Trying other augmentations
###Code
best_model = None
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_coeff', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-2, 1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 20
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
annotation = """Learning rate {},
Learning rate annealing {},
Learning rate decay step size epoch {},
Regularization strength {}"""
hyperparameters_presets = []
for i in range(15):
learning_rate = 10**np.random.uniform(0, -4)
anneal_coeff = np.random.uniform(1e-0, 1e-2)
anneal_epoch = np.random.randint(1, epoch_num)
reg_strength = np.random.choice(reg)
hyperparams = Hyperparams(learning_rate, anneal_coeff, anneal_epoch, reg_strength)
title = annotation.format(learning_rate, anneal_coeff, anneal_epoch, reg_strength)
print(title)
# Model structure
cnn_model = nn.Sequential(
nn.Conv2d(3, 16, 3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(16),
nn.Conv2d(16, 32, 5),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
nn.MaxPool2d(4),
nn.ReLU(inplace=True),
nn.Conv2d(32, 100, 4),
nn.ReLU(inplace=True),
nn.BatchNorm2d(100),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(100, 150),
nn.ReLU(inplace=True),
nn.BatchNorm1d(150),
nn.Linear(150, 10)
)
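    # Shape check: 32 -> 32 (conv 3x3, pad 1) -> 28 (conv 5x5) -> 7 (pool 4) -> 4 (conv 4x4) -> 1 (pool 4),
    # so the first Linear layer sees 100 features per image.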
cnn_model.type(torch.cuda.FloatTensor)
cnn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(cnn_model.parameters(), learning_rate, weight_decay=reg_strength)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(cnn_model,
train_aug_loader,
val_loader,
loss, optimizer, epoch_num,
scheduler, anneal_epoch)
final_val_accuracy = val_history[-1]
run_result = RunResult(cnn_model, train_history, val_history, final_val_accuracy)
run_record[hyperparams] = run_result
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
best_model = best_run.model
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Best result computed by the coarse search: *Learning rate* 0.000169774499014421, *Learning rate annealing* 0.7826605538951845, *Learning rate decay step size epoch* 5, *Regularization strength* 1e-05
###Code
learning_rate = 0.0001699
anneal_coeff = 0.6
anneal_epoch = 3
reg_strength = 0.05
epoch_num = 15
hyperparams = Hyperparams(learning_rate, anneal_coeff, anneal_epoch, reg_strength)
title = annotation.format(learning_rate, anneal_coeff, anneal_epoch, reg_strength)
print(title)
best_model = nn.Sequential(
nn.Conv2d(3, 16, 3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(16),
nn.Conv2d(16, 32, 5),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
nn.MaxPool2d(4),
nn.ReLU(inplace=True),
nn.Conv2d(32, 100, 4),
nn.ReLU(inplace=True),
nn.BatchNorm2d(100),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(100, 150),
nn.ReLU(inplace=True),
nn.BatchNorm1d(150),
nn.Linear(150, 10)
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(best_model.parameters(), learning_rate, weight_decay=reg_strength)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=anneal_epoch, gamma=anneal_coeff)
loss_history, train_history, val_history = train_model(best_model,
train_aug_loader,
val_loader,
loss, optimizer, epoch_num,
scheduler, anneal_epoch)
# Visualise results
plt.figure(figsize=(15, 7))
plt.subplot(211)
plt.title("Loss")
plt.plot(loss_history)
plt.subplot(212)
plt.title("Train/validation accuracy")
plt.plot(train_history)
plt.plot(val_history)
###Output
Learning rate 0.0001699,
Learning rate annealing 0.6,
Learning rate decay step size epoch 3,
Regularization strength 0.05
Average loss: 1.390775, Train accuracy: 0.547555, Val accuracy: 0.784452
Average loss: 0.617896, Train accuracy: 0.824813, Val accuracy: 0.857689
Average loss: 0.497395, Train accuracy: 0.866225, Val accuracy: 0.882738
Average loss: 0.442923, Train accuracy: 0.888646, Val accuracy: 0.903283
Average loss: 0.434637, Train accuracy: 0.896837, Val accuracy: 0.908266
Average loss: 0.440815, Train accuracy: 0.901699, Val accuracy: 0.914613
Average loss: 0.431653, Train accuracy: 0.913473, Val accuracy: 0.917685
Average loss: 0.440815, Train accuracy: 0.917295, Val accuracy: 0.920210
Average loss: 0.451971, Train accuracy: 0.918353, Val accuracy: 0.919323
Average loss: 0.443559, Train accuracy: 0.925707, Val accuracy: 0.921917
Average loss: 0.441531, Train accuracy: 0.925741, Val accuracy: 0.926353
Average loss: 0.440156, Train accuracy: 0.927823, Val accuracy: 0.926626
Average loss: 0.428125, Train accuracy: 0.931850, Val accuracy: 0.927513
Average loss: 0.423173, Train accuracy: 0.932720, Val accuracy: 0.927786
Average loss: 0.420338, Train accuracy: 0.933846, Val accuracy: 0.928742
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# Compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
Final test accuracy - 0.9303933620159803
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new version is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += float(loss_value)  # plain float, so the autograd graph is not retained
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
    # Move the data to device before running it through the model
    correct_samples = 0
    total_samples = 0
    for i_step, (x, y) in enumerate(loader):
        x_gpu = x.to(device)
        y_gpu = y.to(device)
        prediction = model(x_gpu)
        _, indices = torch.max(prediction, 1)
        correct_samples += torch.sum(indices == y_gpu)
        total_samples += y_gpu.shape[0]
    return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation. When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This effectively lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of the augmentations shrinks and they may even hurt the network. PyTorch ships several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the valid ones.
###Code
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create new instances of loaders with the chosen augmentations
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
                                               sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
LeNet. Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# LeNet-like architecture for SVHN task (the classic LeNet-5 layout, as used elsewhere in this document)
lenet_model = nn.Sequential(
    nn.Conv2d(3, 6, 5), nn.Tanh(), nn.AvgPool2d(2),
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),
    nn.Conv2d(16, 120, 5), nn.Tanh(),
    Flattener(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are the learning rate, annealing rate and regularization
# We also encourage you to try different optimizers
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free-form exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that beat our baselines. Things you can and should try: - BatchNormalization (for convolution layers PyTorch calls it [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d)) - Changing the number of layers and their width - Varying the number of training epochs - Trying other augmentations
###Code
best_model = None
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set yourself. In the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# Compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43, 0.44, 0.47],
std=[0.20, 0.20, 0.20])]))
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43, 0.44, 0.47],
std=[0.20, 0.20, 0.20])]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d` MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new version is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
print(f'-- Epoch {epoch}')
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += float(loss_value)  # plain float, so the autograd graph is not retained
        ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print(" Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
    # Move the data to device before running it through the model
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, inds = torch.max(prediction, 1)
correct_samples += torch.sum(inds == y_gpu)
total_samples += y_gpu.shape[0]
return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
-- Epoch 0
Average loss: 1.381345, Train accuracy: 0.536873, Val accuracy: 0.731691
-- Epoch 1
Average loss: 0.691591, Train accuracy: 0.791523, Val accuracy: 0.821309
-- Epoch 2
Average loss: 0.597934, Train accuracy: 0.822646, Val accuracy: 0.832912
-- Epoch 3
Average loss: 0.549867, Train accuracy: 0.837696, Val accuracy: 0.834619
-- Epoch 4
Average loss: 0.521073, Train accuracy: 0.845903, Val accuracy: 0.816122
###Markdown
Data augmentation. When working with images, one particularly important technique is data augmentation - generating additional training data from the original samples. This effectively lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of the augmentations shrinks and they may even hurt the network. PyTorch ships several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
# transforms.ColorJitter(hue=.20, saturation=.30),
# transforms.RandomHorizontalFlip(),
# transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
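# Note: ToTensor/Normalize are deliberately left out above, so each sample stays a PIL image
# that plt.imshow can display directly.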
data_aug_vis = dset.SVHN('./',
transform=tfs)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
    if i == 10:
        break
    plt.subplot(1, 10, i + 1)
    plt.grid(False)
    plt.imshow(x)
    plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.30),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
-- Epoch 0
Average loss: 1.994605, Train accuracy: 0.293963, Val accuracy: 0.476828
-- Epoch 1
Average loss: 1.718870, Train accuracy: 0.381531, Val accuracy: 0.478056
-- Epoch 2
Average loss: 1.629027, Train accuracy: 0.414463, Val accuracy: 0.447137
-- Epoch 3
Average loss: 1.578112, Train accuracy: 0.436013, Val accuracy: 0.503106
-- Epoch 4
Average loss: 1.543494, Train accuracy: 0.451029, Val accuracy: 0.513003
###Markdown
LeNet

Let's implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
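    # Spatial sizes on 32x32 SVHN inputs: 32 -> conv5 -> 28 -> pool2 -> 14 -> conv5 -> 10 -> pool2 -> 5,
    # which is where the 16*5*5 input features of the first fully connected layer come from.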
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
-- Epoch 0
Average loss: 2.064756, Train accuracy: 0.256885, Val accuracy: 0.418948
-- Epoch 1
Average loss: 1.473469, Train accuracy: 0.484200, Val accuracy: 0.587537
-- Epoch 2
Average loss: 1.207869, Train accuracy: 0.586203, Val accuracy: 0.632585
-- Epoch 3
Average loss: 1.096142, Train accuracy: 0.626403, Val accuracy: 0.643301
-- Epoch 4
Average loss: 1.024352, Train accuracy: 0.652169, Val accuracy: 0.641526
-- Epoch 5
Average loss: 0.986888, Train accuracy: 0.663413, Val accuracy: 0.684936
-- Epoch 6
Average loss: 0.956163, Train accuracy: 0.673822, Val accuracy: 0.681455
-- Epoch 7
Average loss: 0.935070, Train accuracy: 0.679521, Val accuracy: 0.693195
-- Epoch 8
Average loss: 0.916723, Train accuracy: 0.686841, Val accuracy: 0.722681
-- Epoch 9
Average loss: 0.894307, Train accuracy: 0.693274, Val accuracy: 0.718108
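###Markdown
A quick sanity check after a run like this is to plot the returned curves. A minimal sketch, assuming the `loss_history`, `train_history` and `val_history` returned by the call above:
###Code
# Plot the per-epoch training loss and the train/val accuracy curves returned by train_model
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(loss_history)
plt.xlabel('epoch')
plt.ylabel('average train loss')
plt.subplot(1, 2, 2)
plt.plot(train_history, label='train')
plt.plot(val_history, label='val')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____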
###Markdown
Hyperparameter tuning
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, scheduler, num_epochs):
loss_history = []
train_history = []
val_history = []
print(' Start train')
for epoch in range(num_epochs):
print(f' Epoch {epoch}')
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
scheduler.step()
print(" Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
if (epoch >= 5) and (val_accuracy < 0.5):
print(f' Warning! Small validation accuracy!!!')
break
if val_accuracy > 0.9:
print(f' Yea!! Best validation accuracy!!!')
break
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, inds = torch.max(prediction, 1)
correct_samples += torch.sum(inds == y_gpu)
total_samples += y_gpu.shape[0]
return float(correct_samples) / total_samples
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg', 'optims'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [2, 5, 10]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
optims = ['sgd', 'adam']
def get_rand_params(learning_rates, anneal_epochs, reg, optims):
rand_lr = learning_rates[np.random.randint(low=0, high=len(learning_rates))]
rand_ann_epoch = anneal_epochs[np.random.randint(low=0, high=len(anneal_epochs))]
rand_reg = reg[np.random.randint(low=0, high=len(reg))]
rand_optim = optims[np.random.randint(low=0, high=len(optims))]
return rand_lr, rand_ann_epoch, rand_reg, rand_optim
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
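# A sketch of continuous log-uniform sampling, as an alternative to the fixed lists above
# (the exponent ranges here are illustrative):
#   rand_lr = 10 ** np.random.uniform(-4, -1)
#   rand_reg = 10 ** np.random.uniform(-7, -3)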
num_random_steps = 10
for step in range(num_random_steps):
print('----------------')
print(f'-- Step {step}')
rand_lr, rand_ann_epoch, rand_reg, rand_optim = get_rand_params(learning_rates, anneal_epochs, reg, optims)
while Hyperparams(rand_lr, rand_ann_epoch, rand_reg, rand_optim) in run_record:
rand_lr, rand_ann_epoch, rand_reg, rand_optim = get_rand_params(learning_rates, anneal_epochs, reg, optims)
print(f' lr={rand_lr}, ann_epoch={rand_ann_epoch}, reg={rand_reg}, optim={rand_optim}')
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
if rand_optim == 'sgd':
optimizer = optim.SGD(lenet_model.parameters(), lr=rand_lr, weight_decay=rand_reg)
elif rand_optim == 'adam':
optimizer = optim.Adam(lenet_model.parameters(), lr=rand_lr, weight_decay=rand_reg)
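    # Step decay: the LR is multiplied by anneal_coeff (0.2) once every rand_ann_epoch epochs,
    # i.e. lr(epoch) = base_lr * anneal_coeff ** (epoch // rand_ann_epoch)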
lambda1 = lambda epoch: anneal_coeff ** (epoch // rand_ann_epoch)
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=[lambda1])
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, scheduler, epoch_num)
run_record[Hyperparams(rand_lr, rand_ann_epoch,
rand_reg, rand_optim)] = RunResult(lenet_model, train_history, val_history, val_history[-1])
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.66, best hyperparams: Hyperparams(learning_rate=0.001, anneal_epochs=5, reg=0.001, optims='adam')
###Markdown
Free exercise - let's catch up with LeNet and overtake it!

Try to find an architecture and training settings that beat our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations as well
###Code
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
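# Note: this assumes a CUDA device is present; a defensive variant would be
# torch.device("cuda:0" if torch.cuda.is_available() else "cpu"), as used further below.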
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43, 0.44, 0.47],
std=[0.20, 0.20, 0.20])]))
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43, 0.44, 0.47],
std=[0.20, 0.20, 0.20])]))
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
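# E.g. Flattener turns a (64, 32, 5, 5) conv activation into a (64, 800) matrix for the linear layers.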
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.30),
transforms.RandomRotation(30, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
def train_model(model, train_loader, val_loader, loss, optimizer, scheduler, num_epochs):
loss_history = []
train_history = []
val_history = []
print(' Start train')
for epoch in range(num_epochs):
print(f' Epoch {epoch}')
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
scheduler.step()
print(" Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
if (epoch >= 4) and (val_accuracy < 0.5):
print(f' Warning! Small validation accuracy!!!')
break
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, inds = torch.max(prediction, 1)
correct_samples += torch.sum(inds == y_gpu)
total_samples += y_gpu.shape[0]
return float(correct_samples) / total_samples
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [5, 10]
reg = [1e-3, 1e-4]
optims = 'adam'
def get_rand_params(learning_rates, anneal_epochs, reg):
rand_lr = learning_rates[np.random.randint(low=0, high=len(learning_rates))]
rand_ann_epoch = anneal_epochs[np.random.randint(low=0, high=len(anneal_epochs))]
rand_reg = reg[np.random.randint(low=0, high=len(reg))]
return rand_lr, rand_ann_epoch, rand_reg
batch_size = 64
epoch_num = 15
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
num_random_steps = 6
for step in range(num_random_steps):
print('----------------')
print(f'-- Step {step}')
rand_lr, rand_ann_epoch, rand_reg = get_rand_params(learning_rates, anneal_epochs, reg)
while Hyperparams(rand_lr, rand_ann_epoch, rand_reg) in run_record:
rand_lr, rand_ann_epoch, rand_reg = get_rand_params(learning_rates, anneal_epochs, reg)
print(f' lr={rand_lr}, ann_epoch={rand_ann_epoch}, reg={rand_reg}')
final_model = nn.Sequential(
# (batch_size, 3, 32, 32)
nn.Conv2d(3, 16, 5),
nn.ReLU(inplace=True),
nn.BatchNorm2d(16),
nn.MaxPool2d(2),
    # (batch_size, 16, 14, 14)
nn.Conv2d(16, 32, 5, padding=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
# (batch_size, 32, 14, 14)
nn.Conv2d(32, 32, 5),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
nn.MaxPool2d(2),
# (batch_size, 32, 5, 5)
Flattener(),
nn.Linear(32*5*5, 256),
nn.ReLU(inplace=True),
nn.Linear(256, 128),
nn.ReLU(inplace=True),
nn.Linear(128, 10)
)
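    # (Optional) quick capacity check:
    #   print(sum(p.numel() for p in final_model.parameters()), 'trainable parameters')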
final_model.type(torch.cuda.FloatTensor)
final_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(final_model.parameters(), lr=rand_lr, weight_decay=rand_reg)
lambda1 = lambda epoch: anneal_coeff ** (epoch // rand_ann_epoch)
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=[lambda1])
loss_history, train_history, val_history = train_model(final_model, train_aug_loader, val_loader, loss, optimizer, scheduler, epoch_num)
    run_record[Hyperparams(rand_lr, rand_ann_epoch, rand_reg)] = RunResult(final_model, train_history, val_history, val_history[-1])
for hyperparams, run_result in run_record.items():
print(f'Params: lr = {hyperparams.learning_rate}, ann_ep = {hyperparams.anneal_epochs}, reg = {hyperparams.reg}')
print(f'\n final val accuracy = {run_result.final_val_accuracy:.2f}\n')
best_model = None
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
best_model = run_result.model
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.92, best hyperparams: Hyperparams(learning_rate=0.01, anneal_epochs=5, reg=0.0001)
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set yourself.

By the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
batch_size = 64
data_test_size = data_test.data.shape[0]
indices_test = list(range(data_test_size))
np.random.shuffle(indices_test)
test_sampler = SubsetRandomSampler(indices_test)
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size,
sampler=test_sampler)
final_test_accuracy = compute_accuracy(final_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch

We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.

Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
#!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
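# The .mat files must end up in ./ because dset.SVHN below is constructed without download=True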
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
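                           # standardize each RGB channel with the fixed per-channel mean/std used throughout this notebook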
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.

Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()
        # Schedulers like StepLR expect one step() per epoch, after the batch loop
        if scheduler:
            scheduler.step()
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
if epoch>=3:
if val_history[-1]<0.2:
print('bad params')
break
if train_history[-1]<=train_history[-2] and train_history[-2]<=train_history[-3]:
if val_history[-1]<=val_history[-2] and val_history[-2]<=val_history[-3]:
print('bad params')
break
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct = 0.0
total = 0.0
for (x,y) in loader:
predictions = model(x.cuda())
# print(predictions.shape)
# print(torch.max(predictions))
_,indices = torch.max(predictions,1)
correct += torch.sum(indices==y.cuda())
total += y.shape[0]
    return float(correct) / total
# raise Exception("Not implemented")
#loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation

When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.

It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks, and it may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.605335, Train accuracy: 0.814183, Val accuracy: 0.839260
Average loss: 0.558544, Train accuracy: 0.831707, Val accuracy: 0.823971
Average loss: 0.536741, Train accuracy: 0.838225, Val accuracy: 0.845471
Average loss: 0.517843, Train accuracy: 0.844623, Val accuracy: 0.859941
Average loss: 0.512772, Train accuracy: 0.845272, Val accuracy: 0.804382
###Markdown
LeNet

Let's implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
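    # 32 -> conv5 -> 28 -> pool2 -> 14 -> conv5 -> 10 -> pool2 -> 5, hence the 16*5*5 features below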
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.359696, Train accuracy: 0.540815, Val accuracy: 0.807317
Average loss: 0.548248, Train accuracy: 0.837201, Val accuracy: 0.849498
Average loss: 0.459641, Train accuracy: 0.861977, Val accuracy: 0.876664
Average loss: 0.413628, Train accuracy: 0.876378, Val accuracy: 0.879189
Average loss: 0.379544, Train accuracy: 0.886053, Val accuracy: 0.876527
Average loss: 0.356997, Train accuracy: 0.891223, Val accuracy: 0.899666
Average loss: 0.339403, Train accuracy: 0.897826, Val accuracy: 0.894342
Average loss: 0.322885, Train accuracy: 0.902536, Val accuracy: 0.893864
Average loss: 0.311364, Train accuracy: 0.904873, Val accuracy: 0.892431
Average loss: 0.297924, Train accuracy: 0.910350, Val accuracy: 0.894888
###Markdown
Hyperparameter tuning
###Code
import itertools
import copy
from torch.optim.lr_scheduler import StepLR
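# StepLR multiplies the learning rate by `gamma` every `step_size` epochs (one scheduler.step() per epoch)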
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
torch.save(lenet_model.state_dict(),'./model.sd')
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
for params in itertools.product(learning_rates,anneal_epochs,reg):
params = Hyperparams(*params)
lenet_model.load_state_dict(torch.load('./model.sd'))
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
print(params)
optimizer = optim.SGD(
lenet_model.parameters(), lr=params.learning_rate, weight_decay = params.reg)
scheduler = StepLR(optimizer, step_size = params.anneal_epochs, gamma = anneal_coeff)
loss_history, train_history, val_history = train_model(
lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler
)
# dkt = copy.deepcopy(lenet_model)
run_record[params] = RunResult(
copy.deepcopy(lenet_model),
copy.deepcopy(train_history),
copy.deepcopy(val_history),
copy.deepcopy(val_history[-1])
)
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
best_model = run_result.model
torch.save(best_model.state_dict(),'./best_model.sd')
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.91, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=5, reg=1e-07)
###Markdown
Free exercise - let's catch up with LeNet and overtake it!

Try to find an architecture and training settings that beat our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations as well
###Code
optimizer = optim.SGD(
    best_model.parameters(), lr=0.01, weight_decay = 1e-5)  # optimize the model actually being trained below
scheduler = StepLR(optimizer, step_size = 5, gamma = .2)
loss_history, train_history, val_history = train_model(
best_model, train_aug_loader, val_loader, loss, optimizer, 20, scheduler)
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set yourself.

By the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch

We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.

Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.

Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler = None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()
        ave_loss = loss_accum / (i_step + 1)
if scheduler:
scheduler.step()
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
right = 0
total = 0
for i, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
preds = model(x_gpu)
        preds = torch.argmax(preds, dim=1)
right += torch.sum(preds == y_gpu)
total += y_gpu.shape[0]
return float(right) / total
# raise Exception("Not implemented")
# loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation

When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.

It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks, and it may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.Resize(32),
transforms.RandomCrop(32),
transforms.ColorJitter(brightness = 0.1, contrast = 0.1, saturation = 0.1, hue = 0.1),
# transforms.RandomVerticalFlip(p = 0.3),
transforms.RandomRotation(20),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN("./", transform = tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size = batch_size, sampler = train_sampler)
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.Resize(32),
transforms.RandomCrop(32),
transforms.ColorJitter(brightness = 0.1, contrast = 0.1, saturation = 0.1, hue = 0.1),
# transforms.RandomHorizontalFlip(p = 0.3),
# transforms.RandomVerticalFlip(p = 0.3),
transforms.RandomRotation(20)
])
data_aug_vis = dset.SVHN('./',
transform = tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.839246, Train accuracy: 0.737587, Val accuracy: 0.831343
Average loss: 0.729897, Train accuracy: 0.772839, Val accuracy: 0.807249
Average loss: 0.696930, Train accuracy: 0.783299, Val accuracy: 0.831616
Average loss: 0.670017, Train accuracy: 0.791267, Val accuracy: 0.824108
Average loss: 0.654480, Train accuracy: 0.797615, Val accuracy: 0.839192
###Markdown
LeNet

Let's implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN.

It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch.

You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.

If the paper is unclear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace = True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace = True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace = True),
nn.Linear(120, 84),
nn.ReLU(inplace = True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
print(device)
###Output
cuda:0
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = [0.2]
anneal_epochs = [1, 5, 10, 15, 20, 50]
regs = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 1
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
for epoch in anneal_epochs:
for coeff in anneal_coeff:
for lr in learning_rates:
for reg in regs:
                params = Hyperparams(lr, epoch, reg)
model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace = True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace = True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace = True),
nn.Linear(120, 84),
nn.ReLU(inplace = True),
nn.Linear(84, 10)
)
model.type(torch.cuda.FloatTensor)
model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
                optimizer = optim.Adam(model.parameters(), lr = lr, weight_decay = reg)
# scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max = 10)
scheduler = optim.lr_scheduler.StepLR(optimizer, gamma = coeff, step_size = epoch)
loss_history, train_history, val_history = train_model(model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
results = RunResult(model, train_history, val_history, val_history[-1])
run_record[params] = results
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.12, best hyperparams: Hyperparams(learning_rate=1.0, anneal_epochs=1e-05, reg=0)
###Markdown
Free exercise - let's catch up with LeNet and overtake it!

Try to find an architecture and training settings that beat our baselines.

What you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations as well
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 10, 5), ## output 28x28x10
nn.BatchNorm2d(10),
nn.ReLU(inplace = True),
nn.MaxPool2d(2), ## output 14x14x10
nn.Conv2d(10, 20, 5), ## output 10x10x20 # nn.BatchNorm2d(16),
nn.ReLU(inplace = True),
nn.MaxPool2d(2), ## output 5x5x20
# nn.Conv2d(20, 30, 5),
# nn.ReLU(inplace = True),
# nn.MaxPool2d(2),
Flattener(),
nn.Linear(5*5*20, 200),
nn.BatchNorm1d(200),
nn.ReLU(inplace = True),
nn.Linear(200, 84),
nn.ReLU(inplace = True),
nn.Linear(84, 10)
)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
# nn_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(best_model.parameters(), lr = 0.001, weight_decay = 0.0001)
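# CosineAnnealingLR sweeps the LR along a half cosine from the initial value down towards eta_min (0 by default) over T_max epochs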
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max = 15)
# scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience = 3)
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 20, scheduler)
# loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, scheduler, 10)
###Output
Average loss: 0.820058, Train accuracy: 0.737774, Val accuracy: 0.854413
Average loss: 0.528518, Train accuracy: 0.835188, Val accuracy: 0.875094
Average loss: 0.456039, Train accuracy: 0.858427, Val accuracy: 0.890178
Average loss: 0.415708, Train accuracy: 0.870218, Val accuracy: 0.890792
Average loss: 0.384312, Train accuracy: 0.881975, Val accuracy: 0.902601
Average loss: 0.358631, Train accuracy: 0.890080, Val accuracy: 0.898164
Average loss: 0.333994, Train accuracy: 0.897434, Val accuracy: 0.907924
Average loss: 0.314382, Train accuracy: 0.903542, Val accuracy: 0.907310
Average loss: 0.296284, Train accuracy: 0.909139, Val accuracy: 0.907924
Average loss: 0.284033, Train accuracy: 0.912995, Val accuracy: 0.916456
Average loss: 0.265656, Train accuracy: 0.919701, Val accuracy: 0.916797
Average loss: 0.252367, Train accuracy: 0.923728, Val accuracy: 0.918026
Average loss: 0.242887, Train accuracy: 0.927089, Val accuracy: 0.918299
Average loss: 0.233869, Train accuracy: 0.929734, Val accuracy: 0.919459
Average loss: 0.230382, Train accuracy: 0.930229, Val accuracy: 0.919869
Average loss: 0.230341, Train accuracy: 0.931167, Val accuracy: 0.920005
Average loss: 0.230368, Train accuracy: 0.931713, Val accuracy: 0.921302
Average loss: 0.234228, Train accuracy: 0.928659, Val accuracy: 0.918845
Average loss: 0.231712, Train accuracy: 0.930110, Val accuracy: 0.919937
Average loss: 0.237097, Train accuracy: 0.927277, Val accuracy: 0.917071
###Markdown
The final chord - let's check the best model on the test set

For a change, you get to write the code that runs the model on the test set yourself.

By the end you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size = batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch

We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.

Tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook installs PyTorch itself)
###Code
#!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
device
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, lets load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.

Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's build the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if scheduler is not None:
scheduler.step()
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def multiclass_accuracy(prediction, ground_truth):
return np.mean([p == gt for p, gt in zip(prediction, ground_truth)])
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
val_accuracy = 0
correct = 0
total = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
pred = model(x_gpu)
_, indices = torch.max(pred, 1)
correct += torch.sum(indices == y_gpu)
total += y_gpu.shape[0]
val_accuracy = float(correct)/total
return val_accuracy
loss_history, train_history, val_history = \
train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.405232, Train accuracy: 0.530475, Val accuracy: 0.746911
Average loss: 0.700626, Train accuracy: 0.787650, Val accuracy: 0.797761
Average loss: 0.598498, Train accuracy: 0.821145, Val accuracy: 0.797898
Average loss: 0.552700, Train accuracy: 0.837150, Val accuracy: 0.827179
Average loss: 0.520064, Train accuracy: 0.847234, Val accuracy: 0.797898
###Markdown
Data augmentation

When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to better network performance.

It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks, and it may even hurt the network.

PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms

Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
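# Quick sanity check (a sketch): indexing the torchvision SVHN dataset returns an
# (image, label) pair, and with ToTensor in the pipeline the image is a [3, 32, 32] tensor.
sample_x, sample_y = data_aug_train[0]
assert sample_x.shape == (3, 32, 32)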
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR)
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 5))
for i, (x, y) in enumerate(data_aug_vis):
if i == 5:
break
plt.subplot(1, 5, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_vis = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_vis, batch_size=batch_size, sampler=train_sampler)
loss_history, train_history, val_history = \
train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.536486, Train accuracy: 0.466761, Val accuracy: 0.548631
Average loss: 1.295947, Train accuracy: 0.552077, Val accuracy: 0.555047
Average loss: 1.222218, Train accuracy: 0.576920, Val accuracy: 0.601392
Average loss: 1.169515, Train accuracy: 0.598045, Val accuracy: 0.615862
Average loss: 1.133949, Train accuracy: 0.610620, Val accuracy: 0.651833
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is not clear enough, you can simply google LeNet and work out the details :) ![alt text](https://engmrk.com/wp-content/uploads/2018/09/LeNEt_Summary_Table.jpg)
###Code
lenet_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 120),
nn.Linear(120, 84),
nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = \
train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 2.130941, Train accuracy: 0.234942, Val accuracy: 0.318750
Average loss: 1.650454, Train accuracy: 0.401563, Val accuracy: 0.448024
Average loss: 1.402663, Train accuracy: 0.499369, Val accuracy: 0.489318
Average loss: 1.289492, Train accuracy: 0.544364, Val accuracy: 0.549314
Average loss: 1.221133, Train accuracy: 0.572842, Val accuracy: 0.577230
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5] #, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
epoch_num = 5 #10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
for lr in learning_rates:
for ae in anneal_epochs:
for r in reg:
print(f'Parameters are lr={lr}, anneal_epochs={ae}, reg={r}')
params = Hyperparams(lr, ae, r)
new_lenet_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 120),
nn.Linear(120, 84),
nn.Linear(84, 10),
)
new_lenet_model.type(torch.cuda.FloatTensor)
new_lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(new_lenet_model.parameters(), lr=lr, weight_decay=r)
step_lr = torch.optim.lr_scheduler.StepLR(optimizer, step_size=ae, gamma=anneal_coeff)
loss_history, train_history, val_history = \
train_model(new_lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler=step_lr)
results = RunResult(new_lenet_model, train_history, val_history, val_history[-1])
run_record[params] = results
###Output
Parameters are lr=1.0, anneal_epochs=1, reg=0.001
Average loss: 2971461.000000, Train accuracy: 0.112753, Val accuracy: 0.191591
Average loss: 3545.909424, Train accuracy: 0.116968, Val accuracy: 0.115009
Average loss: 907.936462, Train accuracy: 0.118606, Val accuracy: 0.191591
Average loss: 279.754913, Train accuracy: 0.120687, Val accuracy: 0.145519
Average loss: 126.580078, Train accuracy: 0.117394, Val accuracy: 0.068937
Parameters are lr=1.0, anneal_epochs=1, reg=0.0001
Average loss: 606352.937500, Train accuracy: 0.110432, Val accuracy: 0.080609
Average loss: 2198.488037, Train accuracy: 0.115858, Val accuracy: 0.191591
Average loss: 675.851257, Train accuracy: 0.116046, Val accuracy: 0.090779
Average loss: 351.111938, Train accuracy: 0.116541, Val accuracy: 0.099379
Average loss: 277.322540, Train accuracy: 0.117633, Val accuracy: 0.080609
Parameters are lr=1.0, anneal_epochs=1, reg=1e-05
Average loss: 1859188.750000, Train accuracy: 0.125994, Val accuracy: 0.191591
Average loss: 55788.199219, Train accuracy: 0.113162, Val accuracy: 0.191591
Average loss: 4023.258789, Train accuracy: 0.095860, Val accuracy: 0.090779
Average loss: 2221.499023, Train accuracy: 0.092755, Val accuracy: 0.115009
Average loss: 1771.856812, Train accuracy: 0.094222, Val accuracy: 0.115009
Parameters are lr=1.0, anneal_epochs=1, reg=1e-07
Average loss: 558153.750000, Train accuracy: 0.110125, Val accuracy: 0.077879
Average loss: 8108.753418, Train accuracy: 0.120824, Val accuracy: 0.099379
Average loss: 6220.397949, Train accuracy: 0.112241, Val accuracy: 0.077879
Average loss: 1045.515503, Train accuracy: 0.100519, Val accuracy: 0.068937
Average loss: 793.456421, Train accuracy: 0.104392, Val accuracy: 0.067504
Parameters are lr=1.0, anneal_epochs=5, reg=0.001
Average loss: 1700298.375000, Train accuracy: 0.136334, Val accuracy: 0.191591
Average loss: 8979.805664, Train accuracy: 0.128826, Val accuracy: 0.191591
Average loss: 13128979.000000, Train accuracy: 0.111866, Val accuracy: 0.099379
Average loss: 668810.937500, Train accuracy: 0.106644, Val accuracy: 0.090779
Average loss: 491919.968750, Train accuracy: 0.104256, Val accuracy: 0.068937
Parameters are lr=1.0, anneal_epochs=5, reg=0.0001
Average loss: 802604.250000, Train accuracy: 0.105638, Val accuracy: 0.068937
Average loss: 4805.804199, Train accuracy: 0.116114, Val accuracy: 0.099379
Average loss: 3148.183838, Train accuracy: 0.114459, Val accuracy: 0.090779
Average loss: 1823.423706, Train accuracy: 0.113589, Val accuracy: 0.115009
Average loss: 1440.116577, Train accuracy: 0.114272, Val accuracy: 0.145519
Parameters are lr=1.0, anneal_epochs=5, reg=1e-05
Average loss: 21001296.000000, Train accuracy: 0.112190, Val accuracy: 0.145519
Average loss: 154046.812500, Train accuracy: 0.114562, Val accuracy: 0.115009
Average loss: 121037.500000, Train accuracy: 0.115722, Val accuracy: 0.099379
Average loss: 102354.828125, Train accuracy: 0.117121, Val accuracy: 0.068937
Average loss: 111426.601562, Train accuracy: 0.114579, Val accuracy: 0.090779
Parameters are lr=1.0, anneal_epochs=5, reg=1e-07
Average loss: 262733.906250, Train accuracy: 0.101218, Val accuracy: 0.090779
Average loss: 13301.765625, Train accuracy: 0.116695, Val accuracy: 0.077879
Average loss: 11229.867188, Train accuracy: 0.114033, Val accuracy: 0.099379
Average loss: 5988.750000, Train accuracy: 0.115927, Val accuracy: 0.191591
Average loss: 3433.102295, Train accuracy: 0.115056, Val accuracy: 0.067504
Parameters are lr=0.1, anneal_epochs=1, reg=0.001
Average loss: 24.869059, Train accuracy: 0.141794, Val accuracy: 0.191591
Average loss: 2.258246, Train accuracy: 0.180135, Val accuracy: 0.191591
Average loss: 2.245479, Train accuracy: 0.186005, Val accuracy: 0.191591
Average loss: 2.241411, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239722, Train accuracy: 0.188616, Val accuracy: 0.191591
Parameters are lr=0.1, anneal_epochs=1, reg=0.0001
Average loss: 5981.710449, Train accuracy: 0.115585, Val accuracy: 0.068937
Average loss: 30.696737, Train accuracy: 0.115654, Val accuracy: 0.067504
Average loss: 7.186845, Train accuracy: 0.116217, Val accuracy: 0.067504
Average loss: 2.657441, Train accuracy: 0.172542, Val accuracy: 0.191591
Average loss: 2.361726, Train accuracy: 0.187609, Val accuracy: 0.191591
Parameters are lr=0.1, anneal_epochs=1, reg=1e-05
Average loss: 6.420296, Train accuracy: 0.160154, Val accuracy: 0.191591
Average loss: 2.249358, Train accuracy: 0.184281, Val accuracy: 0.191591
Average loss: 2.241360, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.240316, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239393, Train accuracy: 0.188616, Val accuracy: 0.191591
Parameters are lr=0.1, anneal_epochs=1, reg=1e-07
Average loss: 6.011037, Train accuracy: 0.154575, Val accuracy: 0.191659
Average loss: 2.289780, Train accuracy: 0.171041, Val accuracy: 0.191591
Average loss: 2.246373, Train accuracy: 0.186517, Val accuracy: 0.191591
Average loss: 2.241560, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239777, Train accuracy: 0.188616, Val accuracy: 0.191591
Parameters are lr=0.1, anneal_epochs=5, reg=0.001
Average loss: 18.642365, Train accuracy: 0.150599, Val accuracy: 0.145519
Average loss: 38561.972656, Train accuracy: 0.123196, Val accuracy: 0.145519
Average loss: 10202.024414, Train accuracy: 0.115039, Val accuracy: 0.090779
Average loss: 134293.000000, Train accuracy: 0.113009, Val accuracy: 0.115009
Average loss: 152710.875000, Train accuracy: 0.112463, Val accuracy: 0.062794
Parameters are lr=0.1, anneal_epochs=5, reg=0.0001
Average loss: 78.985901, Train accuracy: 0.121182, Val accuracy: 0.145519
Average loss: 2.655863, Train accuracy: 0.148415, Val accuracy: 0.099379
Average loss: 2.356536, Train accuracy: 0.160171, Val accuracy: 0.145519
Average loss: 159.753571, Train accuracy: 0.128707, Val accuracy: 0.115009
Average loss: 5.481560, Train accuracy: 0.136385, Val accuracy: 0.090779
Parameters are lr=0.1, anneal_epochs=5, reg=1e-05
Average loss: 16.577637, Train accuracy: 0.138894, Val accuracy: 0.099379
Average loss: 5.801804, Train accuracy: 0.153619, Val accuracy: 0.191591
Average loss: 4.947041, Train accuracy: 0.154233, Val accuracy: 0.145519
Average loss: 1955689.375000, Train accuracy: 0.120295, Val accuracy: 0.077879
Average loss: 25370.615234, Train accuracy: 0.118930, Val accuracy: 0.145519
Parameters are lr=0.1, anneal_epochs=5, reg=1e-07
Average loss: 8.276346, Train accuracy: 0.149626, Val accuracy: 0.191591
Average loss: 2.298062, Train accuracy: 0.162031, Val accuracy: 0.191591
Average loss: 2.310753, Train accuracy: 0.161110, Val accuracy: 0.115009
Average loss: 4.779192, Train accuracy: 0.155598, Val accuracy: 0.191591
Average loss: 2.341234, Train accuracy: 0.168413, Val accuracy: 0.191591
Parameters are lr=0.01, anneal_epochs=1, reg=0.001
Average loss: 2.247945, Train accuracy: 0.186824, Val accuracy: 0.191591
Average loss: 2.239505, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239306, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239150, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239149, Train accuracy: 0.188616, Val accuracy: 0.191591
Parameters are lr=0.01, anneal_epochs=1, reg=0.0001
Average loss: 2.251163, Train accuracy: 0.186619, Val accuracy: 0.191591
Average loss: 2.239777, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239325, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239163, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239125, Train accuracy: 0.188616, Val accuracy: 0.191591
Parameters are lr=0.01, anneal_epochs=1, reg=1e-05
Average loss: 2.251245, Train accuracy: 0.186500, Val accuracy: 0.191591
Average loss: 2.239953, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239264, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239208, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239149, Train accuracy: 0.188616, Val accuracy: 0.191591
Parameters are lr=0.01, anneal_epochs=1, reg=1e-07
Average loss: 2.246891, Train accuracy: 0.187233, Val accuracy: 0.191591
Average loss: 2.239772, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239341, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239179, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.239148, Train accuracy: 0.188616, Val accuracy: 0.191591
Parameters are lr=0.01, anneal_epochs=5, reg=0.001
Average loss: 2.246440, Train accuracy: 0.187353, Val accuracy: 0.191591
Average loss: 2.240492, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.240578, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.241259, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.241934, Train accuracy: 0.188325, Val accuracy: 0.191591
Parameters are lr=0.01, anneal_epochs=5, reg=0.0001
Average loss: 2.253411, Train accuracy: 0.188172, Val accuracy: 0.191591
Average loss: 2.241310, Train accuracy: 0.188377, Val accuracy: 0.191591
Average loss: 2.290202, Train accuracy: 0.187421, Val accuracy: 0.145519
Average loss: 5.220702, Train accuracy: 0.163584, Val accuracy: 0.191591
Average loss: 2.418368, Train accuracy: 0.172081, Val accuracy: 0.191591
Parameters are lr=0.01, anneal_epochs=5, reg=1e-05
Average loss: 2.245901, Train accuracy: 0.187677, Val accuracy: 0.191591
Average loss: 2.241273, Train accuracy: 0.188496, Val accuracy: 0.191591
Average loss: 2.241281, Train accuracy: 0.188616, Val accuracy: 0.191591
Average loss: 2.243233, Train accuracy: 0.187899, Val accuracy: 0.191591
Average loss: 2.241800, Train accuracy: 0.188445, Val accuracy: 0.191591
Parameters are lr=0.01, anneal_epochs=5, reg=1e-07
Average loss: 2.250036, Train accuracy: 0.187472, Val accuracy: 0.191591
Average loss: 2.241639, Train accuracy: 0.188155, Val accuracy: 0.191591
###Markdown
Why do we test anneal epochs of 20-50 if we only run up to 10 epochs? :)
###Code
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free-form exercise - catch up with and surpass LeNet! Try to find an architecture and training settings that beat our baselines. Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations
###Code
epoch_num = 15
new_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.BatchNorm2d(64),
nn.Conv2d(64, 128, 3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(128),
nn.MaxPool2d(4),  # needed to reach the 2x2 spatial size implied by the 128*2*2 Linear input below
Flattener(),
nn.Linear(128*2*2, 120),
nn.Linear(120, 10),
)
new_model.type(torch.cuda.FloatTensor)
new_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(new_model.parameters(), lr=1e-2, weight_decay=1e-2)
step_lr = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
loss_history, train_history, val_history = \
train_model(new_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler=step_lr)
###Output
Average loss: 1.990156, Train accuracy: 0.279323, Val accuracy: 0.324892
Average loss: 1.773163, Train accuracy: 0.351756, Val accuracy: 0.386117
Average loss: 1.717432, Train accuracy: 0.379176, Val accuracy: 0.359088
Average loss: 1.695399, Train accuracy: 0.389141, Val accuracy: 0.332878
Average loss: 1.681706, Train accuracy: 0.394055, Val accuracy: 0.380725
Average loss: 1.560787, Train accuracy: 0.444579, Val accuracy: 0.435806
Average loss: 1.498971, Train accuracy: 0.470805, Val accuracy: 0.456624
Average loss: 1.462743, Train accuracy: 0.485275, Val accuracy: 0.472459
Average loss: 1.435267, Train accuracy: 0.493670, Val accuracy: 0.463518
Average loss: 1.418873, Train accuracy: 0.499573, Val accuracy: 0.505221
Average loss: 1.381603, Train accuracy: 0.514657, Val accuracy: 0.507679
Average loss: 1.367208, Train accuracy: 0.519657, Val accuracy: 0.507269
Average loss: 1.366775, Train accuracy: 0.518718, Val accuracy: 0.508088
Average loss: 1.365597, Train accuracy: 0.521329, Val accuracy: 0.514368
Average loss: 1.357131, Train accuracy: 0.525595, Val accuracy: 0.510272
###Markdown
The final chord - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set. In the end you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
test_loader = torch.utils.data.DataLoader(data_test)
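# Note: no batch_size is passed above, so DataLoader falls back to its default of 1;
# evaluation still works but would run faster with, e.g., batch_size=batch_size.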
final_test_accuracy = compute_accuracy(new_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
Final test accuracy - 0.898740012292563
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras; our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
from itertools import product
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
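# The mean/std values above appear to be precomputed per-channel SVHN statistics
# (an assumption - they come with the assignment template).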
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
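# SubsetRandomSampler draws only from the indices it is given, so the two loaders
# below sample from disjoint parts of data_train.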
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
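# Note: newer PyTorch versions ship an equivalent built-in, nn.Flatten();
# the explicit Flattener helper is kept to match the assignment template.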
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
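# Shape check: a 32x32 input stays 32x32 after each padded 3x3 conv, MaxPool2d(4)
# brings it to 8x8 and then 2x2 - hence the 64*2*2 input features of the Linear layer.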
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, lr_scheduler=None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value.item()  # .item() detaches the value so we don't keep the autograd graph
ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if lr_scheduler:
lr_scheduler.step()
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct = 0
total = 0
for x, y in loader:  # iterate over the loader that was passed in, not the global train_loader
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
indices = torch.argmax(prediction, dim=1)
correct += torch.sum(indices == y_gpu)
total += y.shape[0]
accuracy = float(correct) / total
return accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.407572, Train accuracy: 0.529297, Val accuracy: 0.714824
Average loss: 0.688792, Train accuracy: 0.793400, Val accuracy: 0.803740
Average loss: 0.592186, Train accuracy: 0.825154, Val accuracy: 0.803843
Average loss: 0.541579, Train accuracy: 0.839726, Val accuracy: 0.835529
Average loss: 0.510401, Train accuracy: 0.848463, Val accuracy: 0.846688
###Markdown
Data augmentation. When working with images, one of the most important techniques is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks and it may even hurt the network. PyTorch ships several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms. Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(brightness=0.05, contrast=0.05, hue=0.05, saturation=0.05),
transforms.RandomRotation(25, resample=PIL.Image.BILINEAR),
])
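# ToTensor is deliberately omitted here, so each sample stays a PIL image
# that plt.imshow can display directly.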
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(brightness=0.05, contrast=0.05, hue=0.05, saturation=0.05),
transforms.RandomRotation(5, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47], std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./', transform=tfs)
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size, sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.531815, Train accuracy: 0.841125, Val accuracy: 0.861601
Average loss: 0.502394, Train accuracy: 0.849879, Val accuracy: 0.822902
Average loss: 0.482771, Train accuracy: 0.858001, Val accuracy: 0.865321
Average loss: 0.469052, Train accuracy: 0.860629, Val accuracy: 0.870030
Average loss: 0.462096, Train accuracy: 0.861055, Val accuracy: 0.876071
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
def get_lenet():
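# Shape trace for a 32x32 SVHN input (sketch): 5x5 conv -> 28x28, AvgPool(2) -> 14x14,
# 5x5 conv -> 10x10, AvgPool(2) -> 5x5, 5x5 conv -> 1x1 with 120 channels = 120 features.
# Note: unlike the original LeNet, this version has no nonlinearities between layers,
# which may explain the weak accuracy in the training output below.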
return nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5),
nn.AvgPool2d(kernel_size=2, stride=2),
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5),
nn.AvgPool2d(kernel_size=2, stride=2),
nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5), # a 5x5 conv over the 5x5 map - equivalent to a fully connected layer
Flattener(),
nn.Linear(120, 84),
nn.Linear(84, 10)
)
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = get_lenet()
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 2.202497, Train accuracy: 0.226598, Val accuracy: 0.259615
Average loss: 2.171863, Train accuracy: 0.255639, Val accuracy: 0.277122
Average loss: 2.163796, Train accuracy: 0.268147, Val accuracy: 0.272481
Average loss: 2.158454, Train accuracy: 0.271525, Val accuracy: 0.272344
Average loss: 2.155837, Train accuracy: 0.273777, Val accuracy: 0.289919
Average loss: 2.153603, Train accuracy: 0.275603, Val accuracy: 0.274358
Average loss: 2.152167, Train accuracy: 0.276422, Val accuracy: 0.282821
Average loss: 2.150340, Train accuracy: 0.277378, Val accuracy: 0.283998
Average loss: 2.147557, Train accuracy: 0.279135, Val accuracy: 0.276900
Average loss: 2.146823, Train accuracy: 0.278675, Val accuracy: 0.273180
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
regs = [1e-3, 1e-4, 1e-5, 1e-7]
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
for comb in product(learning_rates, anneal_epochs, regs):
print('\n', comb, '\n')
lr, anneal_epoch, reg = comb
lenet_model = get_lenet()
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
# Build the optimizer/scheduler only after the fresh model exists; otherwise they
# would keep optimizing the previous model's parameters.
optimizer = optim.Adam(lenet_model.parameters(), lr=lr, weight_decay=reg)
lr_scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=anneal_epoch)
params = Hyperparams(lr, anneal_epoch, reg)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, lr_scheduler)
results = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[params] = results
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free-form exercise - catch up with and surpass LeNet! Try to find an architecture and training settings that beat our baselines. Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.BatchNorm2d(64),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.BatchNorm2d(64),
Flattener(),
nn.Linear(64*2*2, 10),
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
epoch_num = 25
anneal_epoch = 5
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.Adam(best_model.parameters(), lr=1e-3, weight_decay=1e-4)
lr_scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=anneal_epoch)
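# CosineAnnealingLR moves the learning rate along a cosine curve with half-period
# T_max scheduler steps (one step per epoch here), which is consistent with the
# roughly 5-epoch waves visible in the validation accuracy below.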
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, lr_scheduler)
###Output
Average loss: 0.990851, Train accuracy: 0.693188, Val accuracy: 0.818824
Average loss: 0.562096, Train accuracy: 0.835716, Val accuracy: 0.855151
Average loss: 0.476020, Train accuracy: 0.862198, Val accuracy: 0.879381
Average loss: 0.417162, Train accuracy: 0.881275, Val accuracy: 0.892195
Average loss: 0.376174, Train accuracy: 0.892417, Val accuracy: 0.903030
Average loss: 0.364065, Train accuracy: 0.897963, Val accuracy: 0.903201
Average loss: 0.366475, Train accuracy: 0.895898, Val accuracy: 0.905795
Average loss: 0.386604, Train accuracy: 0.887674, Val accuracy: 0.899515
Average loss: 0.409256, Train accuracy: 0.881957, Val accuracy: 0.886445
Average loss: 0.419120, Train accuracy: 0.876344, Val accuracy: 0.885046
Average loss: 0.408317, Train accuracy: 0.879552, Val accuracy: 0.893765
Average loss: 0.390160, Train accuracy: 0.886889, Val accuracy: 0.903576
Average loss: 0.354532, Train accuracy: 0.896905, Val accuracy: 0.911903
Average loss: 0.319510, Train accuracy: 0.908610, Val accuracy: 0.920520
Average loss: 0.289837, Train accuracy: 0.917688, Val accuracy: 0.926868
Average loss: 0.280110, Train accuracy: 0.920332, Val accuracy: 0.927209
Average loss: 0.283808, Train accuracy: 0.918490, Val accuracy: 0.926902
Average loss: 0.307605, Train accuracy: 0.909327, Val accuracy: 0.919582
Average loss: 0.331441, Train accuracy: 0.902570, Val accuracy: 0.914906
Average loss: 0.352425, Train accuracy: 0.895932, Val accuracy: 0.910419
Average loss: 0.353225, Train accuracy: 0.895762, Val accuracy: 0.911238
Average loss: 0.339240, Train accuracy: 0.899840, Val accuracy: 0.918217
Average loss: 0.312350, Train accuracy: 0.908849, Val accuracy: 0.916579
Average loss: 0.278857, Train accuracy: 0.919309, Val accuracy: 0.932174
Average loss: 0.252516, Train accuracy: 0.927175, Val accuracy: 0.938180
###Markdown
The final chord - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set. In the end you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
# TODO Write the code to compute accuracy on test set
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras; our notebook installs PyTorch itself)
###Code
from google.colab import drive
drive.mount('/content/drive/')
import os
os.chdir('./drive/My Drive/Code/dlcourse_ai/assignments/assignment3')
!ls
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import pickle
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, more details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None, scheduler_epoch=None,
silent=False):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value.item()  # .item() detaches the value so we don't keep the autograd graph
if scheduler is not None:
if epoch != 0 and epoch % scheduler_epoch == 0:  # anneal every scheduler_epoch epochs, not the inverse
scheduler.step()
ave_loss = loss_accum / (i_step + 1)  # i_step is 0-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
if not silent:
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
if silent:
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples, total_samples = 0, 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
return correct_samples.item()/total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.377189, Train accuracy: 0.543682, Val accuracy: 0.758993
Average loss: 0.707525, Train accuracy: 0.782684, Val accuracy: 0.802198
Average loss: 0.604888, Train accuracy: 0.820035, Val accuracy: 0.801720
Average loss: 0.552691, Train accuracy: 0.835085, Val accuracy: 0.843424
Average loss: 0.520937, Train accuracy: 0.844436, Val accuracy: 0.850522
###Markdown
Data augmentation. When working with images, one of the most important techniques is data augmentation - that is, generating additional training data from the original samples. This lets us "enlarge" the training set, which leads to a better-performing network. It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks and it may even hurt the network. PyTorch ships several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms. Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with 50% probability
- RandomVerticalFlip - vertical flip with 50% probability
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
#transforms.RandomHorizontalFlip(),
#transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.615388, Train accuracy: 0.814609, Val accuracy: 0.826428
Average loss: 0.561433, Train accuracy: 0.829471, Val accuracy: 0.852911
Average loss: 0.532193, Train accuracy: 0.838856, Val accuracy: 0.853457
Average loss: 0.520915, Train accuracy: 0.842781, Val accuracy: 0.840284
Average loss: 0.504299, Train accuracy: 0.846227, Val accuracy: 0.853798
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) - try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is not clear enough, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
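# Classic LeNet-5 layout adapted to a 3-channel 32x32 input; Tanh matches the
# squashing nonlinearity of the original paper (modern variants usually use ReLU).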
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.Tanh(),
nn.AvgPool2d(2),
nn.Conv2d(6, 16, 5),
nn.Tanh(),
nn.AvgPool2d(2),
nn.Conv2d(16, 120, 5),
nn.Tanh(),
Flattener(),
nn.Linear(120, 84),
nn.Tanh(),
nn.Linear(84, 10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 2.133554, Train accuracy: 0.246425, Val accuracy: 0.519965
Average loss: 0.926466, Train accuracy: 0.706242, Val accuracy: 0.800218
Average loss: 0.588771, Train accuracy: 0.816435, Val accuracy: 0.818511
Average loss: 0.502518, Train accuracy: 0.843224, Val accuracy: 0.859805
Average loss: 0.451168, Train accuracy: 0.860816, Val accuracy: 0.868746
Average loss: 0.417961, Train accuracy: 0.870474, Val accuracy: 0.878643
Average loss: 0.395125, Train accuracy: 0.878238, Val accuracy: 0.882534
Average loss: 0.376146, Train accuracy: 0.884227, Val accuracy: 0.885332
Average loss: 0.361223, Train accuracy: 0.888441, Val accuracy: 0.888404
Average loss: 0.347559, Train accuracy: 0.892417, Val accuracy: 0.886288
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'reg', 'optim_name', 'scheduler_name', 'anneal_epoch'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1., 2., 5.]
regs = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
lmbda = lambda epoch: anneal_coeff
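# MultiplicativeLR multiplies the current learning rate by lmbda(epoch) - here a
# constant anneal_coeff - every time scheduler.step() is called.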
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
for lr in learning_rates:
for reg in regs:
for optim_name in ['SGD', 'RMSprop']:
for scheduler_name in ['None', 'Multiplicative']:
for anneal_epoch in anneal_epochs:
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5), nn.Tanh(),
nn.AvgPool2d(2),
nn.Conv2d(6, 16, 5), nn.Tanh(),
nn.AvgPool2d(2),
nn.Conv2d(16, 120, 5), nn.Tanh(),
Flattener(),
nn.Linear(120, 84), nn.Tanh(),
nn.Linear(84, 10),
)
optimizers = {
'SGD': optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg),
'RMSprop': optim.RMSprop(lenet_model.parameters(), lr=lr, weight_decay=reg, momentum=0.9)
}
optimizer = optimizers[optim_name]
schedulers = {
'None': None,
'Multiplicative': optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda=lmbda)
}
scheduler = schedulers[scheduler_name]
if scheduler is None:
anneal_epoch = None
lenet_model.cuda()
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss,
optimizer, 10, scheduler, anneal_epoch, silent=True)
key = Hyperparams(lr, reg, optim_name, scheduler_name, anneal_epoch)
value = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[key] = value
with open('run_record_1.pickle', 'wb') as handle:
pickle.dump(run_record, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Colab failed to run the loop from above at once
# Saves results to disk and concat it now
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'reg', 'optim_name', 'scheduler_name', 'anneal_epoch'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
run_record = {}
for name in ['run_record_lr_1e0', 'run_record_lr_1e-1', 'run_record_lr_1e-2']:
with open('%s.pickle' % name, 'rb') as f:
if not len(run_record):
run_record = pickle.load(f)
else:
run_record.update(pickle.load(f))
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.90, best hyperparams: Hyperparams(learning_rate=0.001, reg=0.001, optim_name='RMSprop', scheduler_name='Multiplicative', anneal_epoch=5.0)
###Markdown
Free-form exercise - catch up with and surpass LeNet! Try to find an architecture and training settings that beat our baselines. Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- Changing the number of layers and their width
- Varying the number of training epochs
- Trying other augmentations
###Code
run_record = {}
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-3, 1e-4]
regs = [1e-3, 1e-4, 1e-5]
anneal_coeff, anneal_epoch = 0.2, 5
lmbda = lambda epoch: anneal_coeff
for lr in learning_rates:
for reg in regs:
nn_model = nn.Sequential(
nn.Conv2d(3, 8, 5), nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(8, 32, 5), nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(32, 140, 5), nn.ReLU(),
Flattener(),
nn.Linear(140, 84), nn.ReLU(),
nn.Linear(84, 10),
)
optimizer = optim.RMSprop(nn_model.parameters(), lr=lr, weight_decay=reg, momentum=0.9)
scheduler = optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda=lmbda)
nn_model.cuda()
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss,
optimizer, 10, scheduler, anneal_epoch)
key = Hyperparams(lr, reg)
value = RunResult(nn_model, train_history, val_history, val_history[-1])
run_record[key] = value
with open('run_record_nn1.pickle', 'wb') as handle:
pickle.dump(run_record, handle, protocol=pickle.HIGHEST_PROTOCOL)
run_record = {}
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e-2, 1e-3, 1e-4]
regs = [1e-3, 1e-4, 1e-5]
anneal_coeff, anneal_epoch = 0.2, 5
lmbda = lambda epoch: anneal_coeff
for lr in learning_rates:
for reg in regs:
nn_model = nn.Sequential(
nn.Conv2d(3, 8, 5), nn.ReLU(), nn.BatchNorm2d(8),
nn.AvgPool2d(2),
nn.Conv2d(8, 16, 5), nn.ReLU(), nn.BatchNorm2d(16),
nn.AvgPool2d(2),
nn.Conv2d(16, 64, 5), nn.ReLU(), nn.BatchNorm2d(64),
Flattener(),
nn.Linear(64, 84), nn.ReLU(),
nn.Linear(84, 10)
)
optimizer = optim.RMSprop(nn_model.parameters(), lr=lr, weight_decay=reg, momentum=0.9)
scheduler = optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda=lmbda)
nn_model.cuda()
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss,
optimizer, 10, scheduler, anneal_epoch)
key = Hyperparams(lr, reg)
value = RunResult(nn_model, train_history, val_history, val_history[-1])
run_record[key] = value
with open('run_record_nn2.pickle', 'wb') as handle:
pickle.dump(run_record, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Colab failed to run the loop from above at once
# Saves results to disk and concat it now
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
run_record = {}
with open('run_record_nn1.pickle', 'rb') as f:
run_record1 = pickle.load(f)
with open('run_record_nn2.pickle', 'rb') as f:
run_record2 = pickle.load(f)
def find_best(run_record, name):
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("%s, Best validation accuracy: %4.3f, best hyperparams: %s" % (name, best_val_accuracy, best_hyperparams))
find_best(run_record1, 'nn1')
find_best(run_record2, 'nn2')
###Output
nn1, Best validation accuracy: 0.909, best hyperparams: Hyperparams(learning_rate=0.0001, reg=0.0001)
nn2, Best validation accuracy: 0.910, best hyperparams: Hyperparams(learning_rate=0.001, reg=0.0001)
###Markdown
The final chord - let's check the best model on the test set. For a change, you get to write the code that runs the model on the test set. In the end you should train a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
# best_model = run_record2[Hyperparams(learning_rate=0.001, reg=0.0001)][0]
best_model = run_record1[Hyperparams(learning_rate=0.0001, reg=0.0001)][0]
best_model.eval()
correct_samples, total_samples = 0, 0
for i_step, (x, y) in enumerate(test_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = best_model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
test_accuracy = float(correct_samples) / total_samples
print("Final test accuracy - %4.2f%%" % (test_accuracy*100))
###Output
_____no_output_____
###Markdown
Assignment 3.2 - convolutional neural networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (there is no need to install Keras; our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
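###Markdown
A quick shape check of `Flattener` (a sketch added for illustration): it turns a `(batch, C, H, W)` activation into `(batch, C*H*W)` so it can feed a fully connected layer.
###Code
# After two MaxPool2d(4) stages a 32x32 SVHN image becomes (batch, 64, 2, 2)
dummy = torch.zeros(8, 64, 2, 2)
print(Flattener()(dummy).shape)  # torch.Size([8, 256]), matching nn.Linear(64*2*2, 10) below
###Output
_____no_output_____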
###Markdown
Let's create the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
    # Implementation copied from the previous assignment; the only change is that
    # each batch is moved to the device first, just like in train_model
    correct_samples = 0
    total_samples = 0
    with torch.no_grad():
        for x, y in loader:
            x_gpu = x.to(device)
            y_gpu = y.to(device)
            _, indices = torch.max(model(x_gpu), 1)
            correct_samples += torch.sum(indices == y_gpu)
            total_samples += y.shape[0]
    return float(correct_samples) / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This effectively "enlarges" the training set, which leads to a better-performing network.
It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks, and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with probability 50%
- RandomVerticalFlip - vertical flip with probability 50%
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
tfs = transforms.Compose([
    # Flips are deliberately left out: a flipped digit often stops looking like
    # the correct class, while mild color jitter and small rotations are safe
    transforms.ColorJitter(hue=.20, saturation=.20),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.43,0.44,0.47],
                         std=[0.20,0.20,0.20])
])
# New dataset and loader instances with the chosen augmentations
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
                                               sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.
You do **not** need to implement the LeNet layers and loss function that are absent from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is not very clear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
###Markdown
Hyperparameter search
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
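###Markdown
A minimal random-search sketch for the TODO above (an illustration under the notebook's own names, not the graded solution): sample learning rate and regularization log-uniformly, pick an annealing period, and record every run under its `Hyperparams` key.
###Code
import random

# Hypothetical trial count - tune to your GPU budget
for _ in range(5):
    hp = Hyperparams(learning_rate=10 ** random.uniform(-4, 0),
                     anneal_epochs=random.choice(anneal_epochs),
                     reg=10 ** random.uniform(-7, -3))
    # Build a fresh model and optimizer from hp here, then:
    # _, train_history, val_history = train_model(model, train_aug_loader, val_loader, loss, optimizer, epoch_num)
    # run_record[hp] = RunResult(model, train_history, val_history, val_history[-1])
###Output
_____no_output_____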
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- changing the number of layers and their width
- varying the number of training epochs
- trying other augmentations as well
A minimal BatchNorm starting sketch is shown after the placeholder cell below.
###Code
best_model = None
###Output
_____no_output_____
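###Markdown
One possible starting point for the free exercise (a minimal sketch, assuming the baseline layers above; not a tuned solution): the simple network with `nn.BatchNorm2d` inserted after each convolution.
###Code
best_model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.BatchNorm2d(64),   # normalizes conv activations, usually stabilizes and speeds up training
    nn.ReLU(inplace=True),
    nn.MaxPool2d(4),
    nn.Conv2d(64, 64, 3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(4),
    Flattener(),
    nn.Linear(64*2*2, 10),
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
# Train it the same way as before, e.g.:
# optimizer = optim.SGD(best_model.parameters(), lr=1e-1, weight_decay=1e-4)
# loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____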
###Markdown
The final chord - let's check the best model on the test set
For a change, you write the code that runs the model on the test set yourself.
As a result, you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
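###Markdown
A possible filling for the TODO above (a sketch assuming `best_model` has actually been built and trained in the free-exercise cell, and `compute_accuracy` from earlier in this notebook):
###Code
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____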
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
Google Colab setup tutorial: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
# my estimate of the parameter count (my convolution estimates were off by a factor of 3 - why? and I forgot the bias)
12288 + 576 + 2560
###Output
_____no_output_____
###Markdown
The actual number of parameters
###Code
sum(p.numel() for p in nn_model.parameters())
from prettytable import PrettyTable
def count_parameters(model):
table = PrettyTable(["Modules", "Parameters"])
total_params = 0
for name, parameter in model.named_parameters():
if not parameter.requires_grad: continue
param = parameter.numel()
table.add_row([name, param])
total_params+=param
print(table)
print(f"Total Trainable Params: {total_params}")
return total_params
count_parameters(nn_model)
###Output
+----------+------------+
| Modules | Parameters |
+----------+------------+
| 0.weight | 1728 |
| 0.bias | 64 |
| 3.weight | 36864 |
| 3.bias | 64 |
| 7.weight | 2560 |
| 7.bias | 10 |
+----------+------------+
Total Trainable Params: 41290
###Markdown
Why do we multiply by 3 once more? Because the kernel is 3x3: a conv layer holds in_channels * out_channels * 3 * 3 weights, so 64 * 64 * 3 covers only one kernel dimension and needs another factor of 3 (giving 36864, the 3.weight row above).
###Code
(64 * 64 * 3) * 3
###Output
_____no_output_____
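###Markdown
A small helper making the formula explicit (added for illustration, not part of the original assignment): a `Conv2d` layer stores `in_channels * out_channels * k * k` weights plus `out_channels` biases.
###Code
def conv2d_param_count(in_ch, out_ch, k):
    # weight tensor (out_ch, in_ch, k, k) plus one bias per output channel
    return in_ch * out_ch * k * k + out_ch

print(conv2d_param_count(3, 64, 3))   # 1792  = 1728 + 64, matches rows 0.weight/0.bias above
print(conv2d_param_count(64, 64, 3))  # 36928 = 36864 + 64, matches rows 3.weight/3.bias above
###Output
_____no_output_____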
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
from datetime import datetime
start_time = datetime.now()
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
start_time = datetime.now()
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
epoch_calc_time = datetime.now() - start_time
print(
"Average loss: %f, Train accuracy: %f, Val accuracy: %f, Calc Time: %s"
% (ave_loss, train_accuracy, val_accuracy, epoch_calc_time)
)
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
proba = model(x_gpu)
# values, indicies
_, preds = torch.max(proba, 1)
correct_samples += torch.sum(preds == y_gpu)
total_samples += y.shape[0]
accuracy = float(correct_samples) / total_samples
return accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.701161, Train accuracy: 0.780091, Val accuracy: 0.774896, Calc Time: 0:00:17.453701
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This effectively "enlarges" the training set, which leads to a better-performing network.
It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks, and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with probability 50%
- RandomVerticalFlip - vertical flip with probability 50%
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
/opt/conda/lib/python3.7/site-packages/torchvision/transforms/transforms.py:1231: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.185979, Train accuracy: 0.613555, Val accuracy: 0.721725
Average loss: 0.948479, Train accuracy: 0.694809, Val accuracy: 0.766228
Average loss: 0.874810, Train accuracy: 0.718578, Val accuracy: 0.731691
Average loss: 0.831418, Train accuracy: 0.735403, Val accuracy: 0.771756
Average loss: 0.805682, Train accuracy: 0.743234, Val accuracy: 0.774896
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.
You do **not** need to implement the LeNet layers and loss function that are absent from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is not very clear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1),
nn.Tanh(),
nn.AvgPool2d(kernel_size=2),
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1),
nn.Tanh(),
nn.AvgPool2d(kernel_size=2),
nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5, stride=1),
nn.Tanh(),
Flattener(),
nn.Linear(in_features=120, out_features=84),
nn.Tanh(),
nn.Linear(in_features=84, out_features=10),
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
count_parameters(lenet_model)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 2.214372, Train accuracy: 0.208375, Val accuracy: 0.349464, Calc Time: 0:00:47.114591
Average loss: 1.460377, Train accuracy: 0.514128, Val accuracy: 0.696198, Calc Time: 0:00:47.528279
Average loss: 0.975341, Train accuracy: 0.689690, Val accuracy: 0.733397, Calc Time: 0:00:47.303167
Average loss: 0.807086, Train accuracy: 0.745043, Val accuracy: 0.770323, Calc Time: 0:00:47.400748
Average loss: 0.718331, Train accuracy: 0.773351, Val accuracy: 0.793529, Calc Time: 0:00:48.124109
Average loss: 0.660032, Train accuracy: 0.792700, Val accuracy: 0.809569, Calc Time: 0:00:48.068744
Average loss: 0.625113, Train accuracy: 0.805037, Val accuracy: 0.821104, Calc Time: 0:00:47.851523
Average loss: 0.595176, Train accuracy: 0.813466, Val accuracy: 0.819261, Calc Time: 0:00:48.067123
Average loss: 0.573904, Train accuracy: 0.821008, Val accuracy: 0.833663, Calc Time: 0:00:47.777641
###Markdown
Hyperparameter search
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- changing the number of layers and their width
- varying the number of training epochs
- trying other augmentations as well
###Code
best_model = None
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set
For a change, you write the code that runs the model on the test set yourself.
As a result, you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(lenet_model, test_loader)  # lenet_model is the last model trained above
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
Google Colab setup tutorial: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torch.optim.lr_scheduler import StepLR
import itertools
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler=None):
    # Note: the original default argument StepLR(optimizer, ...) would be evaluated once,
    # at definition time, against the global optimizer - so the scheduler is optional instead
    loss_history = []
    train_history = []
    val_history = []
    for epoch in range(num_epochs):
        if scheduler is not None:
            scheduler.step()
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
indices = torch.argmax(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
    with torch.no_grad():
        correct_samples = 0
        total_samples = 0
        for i_step, (x, y) in enumerate(loader):
            x_gpu = x.to(device)
            y_gpu = y.to(device)
            prediction = model(x_gpu)
            indices = torch.argmax(prediction, 1)
            correct_samples += torch.sum(indices == y_gpu)
            total_samples += y.shape[0]
    val_accuracy = float(correct_samples) / total_samples
    return val_accuracy
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.421254, Train accuracy: 0.871276, Val accuracy: 0.858644
Average loss: 0.421291, Train accuracy: 0.871276, Val accuracy: 0.858644
Average loss: 0.421258, Train accuracy: 0.871276, Val accuracy: 0.858644
Average loss: 0.421260, Train accuracy: 0.871276, Val accuracy: 0.858644
Average loss: 0.421271, Train accuracy: 0.871276, Val accuracy: 0.858644
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This effectively "enlarges" the training set, which leads to a better-performing network.
It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks, and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with probability 50%
- RandomVerticalFlip - vertical flip with probability 50%
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
#transforms.ColorJitter(hue=.20, saturation=.20),
#transforms.RandomHorizontalFlip(),
#transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.602098, Train accuracy: 0.817203, Val accuracy: 0.812573
Average loss: 0.557348, Train accuracy: 0.831041, Val accuracy: 0.811958
Average loss: 0.537788, Train accuracy: 0.838361, Val accuracy: 0.849771
Average loss: 0.519573, Train accuracy: 0.843156, Val accuracy: 0.837212
Average loss: 0.511489, Train accuracy: 0.843480, Val accuracy: 0.858644
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.
If the paper is not very clear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5, padding=0),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10)
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.502845, Train accuracy: 0.483295, Val accuracy: 0.817623
Average loss: 0.560720, Train accuracy: 0.832440, Val accuracy: 0.855846
Average loss: 0.465109, Train accuracy: 0.860799, Val accuracy: 0.879667
Average loss: 0.411326, Train accuracy: 0.876310, Val accuracy: 0.880008
Average loss: 0.382405, Train accuracy: 0.884756, Val accuracy: 0.835984
Average loss: 0.360846, Train accuracy: 0.890711, Val accuracy: 0.883762
Average loss: 0.339080, Train accuracy: 0.896768, Val accuracy: 0.888745
Average loss: 0.326137, Train accuracy: 0.901426, Val accuracy: 0.888540
Average loss: 0.308741, Train accuracy: 0.907074, Val accuracy: 0.889427
Average loss: 0.299257, Train accuracy: 0.909395, Val accuracy: 0.894205
###Markdown
Hyperparameter search
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams_t = namedtuple("Hyperparams_t", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult_t = namedtuple("RunResult_t", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
parameters = list(itertools.product(learning_rates, anneal_epochs, reg))
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in run_record dictionnary
# Important: perform search in logarithmic space!
# TODO: Your code here!
for learning_rate, anneal_epochs_i, reg_i in parameters:
    hyperparams = Hyperparams_t(learning_rate, anneal_epochs_i, reg_i)
    lenet_model = nn.Sequential(
        nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=0),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.Conv2d(6, 16, 5, padding=0),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        Flattener(),
        nn.Linear(16*5*5, 120),
        nn.ReLU(inplace=True),
        nn.Linear(120, 84),
        nn.ReLU(inplace=True),
        nn.Linear(84, 10)
    )
    lenet_model.type(torch.cuda.FloatTensor)
    if torch.cuda.device_count() > 1:
        print("Let's use", torch.cuda.device_count(), "GPUs!")
        # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
        lenet_model = nn.DataParallel(lenet_model)
    lenet_model.to(device)
    loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
    optimizer = optim.SGD(lenet_model.parameters(), lr=hyperparams.learning_rate, weight_decay=hyperparams.reg)
    scheduler = StepLR(optimizer, step_size=hyperparams.anneal_epochs, gamma=anneal_coeff)
    loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
    # nn.Module has no .copy(), and the final accuracy is a plain float - store them directly
    run_record[hyperparams] = RunResult_t(lenet_model, train_history, val_history, val_history[-1])
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- changing the number of layers and their width
- varying the number of training epochs
- trying other augmentations as well
###Code
best_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 128, 3, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(128*2*2, 10),
)
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = torch.optim.Adam(best_model.parameters(), lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=1e-4, amsgrad=False)
scheduler = StepLR(optimizer, step_size = 5, gamma=0.5)
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, 20, scheduler);
###Output
Average loss: 1.297577, Train accuracy: 0.581220, Val accuracy: 0.781175
Average loss: 0.667728, Train accuracy: 0.804389, Val accuracy: 0.830455
Average loss: 0.572768, Train accuracy: 0.834044, Val accuracy: 0.858781
Average loss: 0.522338, Train accuracy: 0.847046, Val accuracy: 0.848133
Average loss: 0.491401, Train accuracy: 0.855322, Val accuracy: 0.869906
Average loss: 0.434679, Train accuracy: 0.875013, Val accuracy: 0.878575
Average loss: 0.415877, Train accuracy: 0.880029, Val accuracy: 0.881783
Average loss: 0.405842, Train accuracy: 0.882503, Val accuracy: 0.880486
Average loss: 0.393997, Train accuracy: 0.886189, Val accuracy: 0.885537
Average loss: 0.386143, Train accuracy: 0.888271, Val accuracy: 0.881305
Average loss: 0.357555, Train accuracy: 0.898577, Val accuracy: 0.898369
Average loss: 0.347993, Train accuracy: 0.900505, Val accuracy: 0.894546
Average loss: 0.344925, Train accuracy: 0.899908, Val accuracy: 0.897413
Average loss: 0.341892, Train accuracy: 0.902280, Val accuracy: 0.897959
Average loss: 0.337351, Train accuracy: 0.903593, Val accuracy: 0.897891
Average loss: 0.318168, Train accuracy: 0.909566, Val accuracy: 0.902874
Average loss: 0.316514, Train accuracy: 0.909583, Val accuracy: 0.901577
Average loss: 0.314219, Train accuracy: 0.911016, Val accuracy: 0.899188
Average loss: 0.308849, Train accuracy: 0.912535, Val accuracy: 0.904239
Average loss: 0.305645, Train accuracy: 0.911681, Val accuracy: 0.904102
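###Markdown
Before the final check it is worth saving the trained weights (a sketch; the file name best_model.pth is arbitrary), so the model that cleared 90% survives a Colab runtime reset:
###Code
torch.save(best_model.state_dict(), 'best_model.pth')
# Restore later with: best_model.load_state_dict(torch.load('best_model.pth'))
###Output
_____no_output_____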
###Markdown
The final chord - let's check the best model on the test set
For a change, you write the code that runs the model on the test set yourself.
As a result, you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
Final test accuracy - 0.9047326367547633
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
Google Colab setup tutorial: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple, OrderedDict  # OrderedDict is used by the LeNet cell below
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
ls
###Output
sample_data/  test_32x32.mat  train_32x32.mat
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation.
Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers:
Convolutional - `nn.Conv2d`
MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference here is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
    """
    Computes accuracy on the dataset wrapped in a loader
    Returns: accuracy as a float value between 0 and 1
    """
    model.eval()  # Evaluation mode
    with torch.no_grad():
        # Mean of per-batch accuracies; the smaller last batch gets equal
        # weight, which is a slight approximation of the exact accuracy
        acc = [torch.mean((model(batch[0].to(device)).argmax(axis=1) == batch[1].to(device)).float())
               for batch in loader]
    return torch.mean(torch.Tensor(acc)).item()  # plain float, as the docstring promises
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.404994, Train accuracy: 0.532181, Val accuracy: 0.693724
Average loss: 0.695088, Train accuracy: 0.789919, Val accuracy: 0.813511
Average loss: 0.592959, Train accuracy: 0.821810, Val accuracy: 0.825059
Average loss: 0.544001, Train accuracy: 0.836996, Val accuracy: 0.822273
Average loss: 0.511085, Train accuracy: 0.848923, Val accuracy: 0.826765
###Markdown
Data augmentation
When working with images, one especially important technique is data augmentation - generating additional training data from the original samples. This effectively "enlarges" the training set, which leads to a better-performing network.
It is important that the augmented data resemble what could occur in real life; otherwise the benefit of augmentation shrinks, and it can even hurt the network.
PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Below we use the following generation algorithms:
- ColorJitter - random color change
- RandomHorizontalFlip - horizontal flip with probability 50%
- RandomVerticalFlip - vertical flip with probability 50%
- RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Pick only the appropriate ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20]),
])
# TODO create new instances of loaders with the augmentations you chose
# train_aug_loader = None
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 0.608016, Train accuracy: 0.816111, Val accuracy: 0.838359
Average loss: 0.555477, Train accuracy: 0.831024, Val accuracy: 0.848071
Average loss: 0.530161, Train accuracy: 0.840033, Val accuracy: 0.819402
Average loss: 0.517352, Train accuracy: 0.842661, Val accuracy: 0.844056
Average loss: 0.506953, Train accuracy: 0.847115, Val accuracy: 0.860705
###Markdown
LeNet
Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it copes with SVHN.
It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.
You do **not** need to implement the LeNet layers and loss function that are absent from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
If the paper is not very clear, you can simply google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
OrderedDict([
('c1', nn.Conv2d(3, 6, kernel_size=(5, 5))),
('relu1', nn.ReLU(inplace=True)),
('s2', nn.MaxPool2d(kernel_size=(2, 2), stride=2)),
('c3', nn.Conv2d(6, 16, kernel_size=(5, 5))),
('relu3', nn.ReLU(inplace=True)),
('s4', nn.MaxPool2d(kernel_size=(2, 2), stride=2)),
('c5', nn.Conv2d(16, 120, kernel_size=(5, 5))),
('relu5', nn.ReLU(inplace=True)),
('flat1', Flattener()),
('f6', nn.Linear(120, 84)),
('relu6', nn.ReLU(inplace=True)),
('f7', nn.Linear(84, 10)),
        ('sig7', nn.LogSoftmax(dim=-1))  # note: nn.CrossEntropyLoss already applies log-softmax internally, so nn.NLLLoss would be the matching loss for this layer
]))
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 1.326593, Train accuracy: 0.554278, Val accuracy: 0.819436
Average loss: 0.561593, Train accuracy: 0.830086, Val accuracy: 0.867045
Average loss: 0.460042, Train accuracy: 0.862045, Val accuracy: 0.875790
Average loss: 0.412782, Train accuracy: 0.876344, Val accuracy: 0.880901
Average loss: 0.377948, Train accuracy: 0.885831, Val accuracy: 0.896811
Average loss: 0.357383, Train accuracy: 0.893014, Val accuracy: 0.895548
Average loss: 0.331975, Train accuracy: 0.900317, Val accuracy: 0.898846
Average loss: 0.319268, Train accuracy: 0.902143, Val accuracy: 0.907443
Average loss: 0.304057, Train accuracy: 0.908508, Val accuracy: 0.897402
Average loss: 0.301345, Train accuracy: 0.908815, Val accuracy: 0.904020
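###Markdown
A side benefit of building `nn.Sequential` from an `OrderedDict` (an illustrative note, not part of the assignment): layers become addressable by name, which helps when inspecting or freezing parts of the network.
###Code
print(lenet_model.c1)  # the first conv layer, accessed by the name given in the OrderedDict
###Output
_____no_output_____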
###Markdown
Hyperparameter search
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet!
Try to find an architecture and training settings that beat our baselines.
Things you can and should try:
- BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
- changing the number of layers and their width
- varying the number of training epochs
- trying other augmentations as well
###Code
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer3 = nn.Sequential(
nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.flattener = Flattener()
self.fc = nn.Linear(4 * 4 * 256, num_classes)
    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        # Flattener already produces a (batch, features) tensor, so no extra reshape is needed
        out = self.flattener(out)
        out = self.fc(out)
        return out
model = ConvNet().to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 0.977716, Train accuracy: 0.707692, Val accuracy: 0.847525
Average loss: 0.467956, Train accuracy: 0.866021, Val accuracy: 0.868404
Average loss: 0.386884, Train accuracy: 0.888612, Val accuracy: 0.886719
Average loss: 0.333018, Train accuracy: 0.905948, Val accuracy: 0.896174
Average loss: 0.296669, Train accuracy: 0.916630, Val accuracy: 0.897084
Average loss: 0.266629, Train accuracy: 0.924274, Val accuracy: 0.906198
Average loss: 0.241780, Train accuracy: 0.932259, Val accuracy: 0.896805
Average loss: 0.221443, Train accuracy: 0.938914, Val accuracy: 0.904578
Average loss: 0.202009, Train accuracy: 0.945227, Val accuracy: 0.912175
Average loss: 0.185363, Train accuracy: 0.950346, Val accuracy: 0.915972
###Markdown
The final chord - let's check the best model on the test set
For a change, you write the code that runs the model on the test set yourself.
As a result, you should have trained a model that shows more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test)
final_test_accuracy = compute_accuracy(model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch
We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends.
Google Colab setup tutorial: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (no need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original data. This effectively "enlarges" the training set, which leads to a better-performing network. It is important that the augmented data resemble data that could occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
_____no_output_____
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Choose only the correct ones.
###Code
# TODO:
tfs = transforms.Compose([
# TODO: Add good augmentations
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
train_aug_loader = None
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
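###Markdown
One reasonable selection (an illustrative sketch - the solved copy of this notebook further down the file makes the same choice): flips are dropped because a mirrored or upside-down digit becomes a different symbol, while mild color jitter and small rotations keep the labels intact.
###Code
# Hypothetical fill-in for the TODO above, shown for illustration only:
# keep label-preserving augmentations and drop the flips
tfs = transforms.Compose([
    transforms.ColorJitter(hue=.20, saturation=.20),
    transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.43,0.44,0.47],
                         std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./', transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
                                               sampler=train_sampler)
###Output
_____no_output_____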
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is unclear, just google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 10
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# Use grid search or random search and record all runs in the run_record dictionary
# Important: perform search in logarithmic space!
# TODO: Your code here!
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
_____no_output_____
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that do better than our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch) - Changing the number of layers and their width - Varying the number of training epochs - Trying other augmentations
###Code
best_model = None
###Output
_____no_output_____
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set. In the end you should have trained a model that achieves more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
final_test_accuracy = 0.0
print("Final test accuracy - ", final_test_accuracy)
###Output
_____no_output_____
###Markdown
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch. We will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where a free GPU is available! The course authors thank Google and hope the party never ends. A tutorial on setting up Google Colab: https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d (you don't need to install Keras - our notebook installs PyTorch itself)
###Code
# Install PyTorch and download data
!pip3 install torch torchvision
#!pip install wget
#import wget
#wget.download("http://ufldl.stanford.edu/housenumbers/train_32x32.mat")
#wget.download("http://ufldl.stanford.edu/housenumbers/test_32x32.mat")
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Loading the data
###Code
# First, let's load the dataset
data_train = dset.SVHN('./',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./', split='test', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
###Output
_____no_output_____
###Markdown
Splitting the data into training and validation. Just in case, details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
###Code
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
###Output
_____no_output_____
###Markdown
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
###Code
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new one is that it must move the data to the GPU before running it through the model. Do it the same way the `train_model` function does.
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler = None):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
if scheduler is not None:
scheduler.step(ave_loss)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
    """
    Computes accuracy on the dataset wrapped in a loader
    Returns: accuracy as a float value between 0 and 1
    """
    model.eval()  # Evaluation mode
    correct_samples = 0
    total_samples = 0
    with torch.no_grad():
        for x, y in loader:
            # Move the data to the GPU before running it through the model
            x = x.to(device)
            y = y.to(device)
            prediction = torch.argmax(model(x), 1)
            correct_samples += float(torch.sum(prediction == y))
            total_samples += y.shape[0]
    return correct_samples / total_samples
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.442839, Train accuracy: 0.516278, Val accuracy: 0.711965
Average loss: 0.719823, Train accuracy: 0.782104, Val accuracy: 0.780083
Average loss: 0.607837, Train accuracy: 0.818380, Val accuracy: 0.806020
Average loss: 0.560281, Train accuracy: 0.832969, Val accuracy: 0.810593
Average loss: 0.524020, Train accuracy: 0.844367, Val accuracy: 0.839055
###Markdown
Data augmentation. When working with images, one especially important technique is data augmentation - generating additional training data from the original data. This effectively "enlarges" the training set, which leads to a better-performing network. It is important that the augmented data resemble data that could occur in real life; otherwise the benefit of augmentation shrinks and it can even hurt the network. PyTorch ships with several such algorithms, called `transforms`. You can read more about them here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms Below we use the following generation algorithms: - ColorJitter - random color change - RandomHorizontalFlip - horizontal flip with 50% probability - RandomVerticalFlip - vertical flip with 50% probability - RandomRotation - random rotation
###Code
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
###Output
c:\users\степан\appdata\local\programs\python\python39\lib\site-packages\torchvision\transforms\transforms.py:1200: UserWarning: Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead
warnings.warn(
###Markdown
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
###Code
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Are all augmentations equally useful on this dataset? Could some of them confuse the model? Choose only the correct ones.
###Code
# TODO:
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR)
])
data_aug_vis = dset.SVHN('./',
transform=tfs
)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
data_aug_train = dset.SVHN('./',
transform=tfs
)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
###Output
Average loss: 1.924172, Train accuracy: 0.316128, Val accuracy: 0.447137
Average loss: 1.703066, Train accuracy: 0.390148, Val accuracy: 0.503515
Average loss: 1.615564, Train accuracy: 0.425963, Val accuracy: 0.510068
Average loss: 1.560174, Train accuracy: 0.447753, Val accuracy: 0.508088
Average loss: 1.523750, Train accuracy: 0.459236, Val accuracy: 0.514982
###Markdown
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST - let's see how it handles SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch. You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know. If the paper is unclear, just google LeNet and work out the details :)
###Code
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, padding=0),
nn.MaxPool2d(kernel_size=2),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, padding=0),
nn.MaxPool2d(kernel_size=2),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10))
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
###Output
Average loss: 2.145817, Train accuracy: 0.226564, Val accuracy: 0.408300
Average loss: 1.553668, Train accuracy: 0.451797, Val accuracy: 0.552932
Average loss: 1.279018, Train accuracy: 0.561632, Val accuracy: 0.614839
Average loss: 1.144041, Train accuracy: 0.607839, Val accuracy: 0.626032
Average loss: 1.065721, Train accuracy: 0.635549, Val accuracy: 0.646236
Average loss: 1.026487, Train accuracy: 0.650855, Val accuracy: 0.667122
Average loss: 0.994068, Train accuracy: 0.660939, Val accuracy: 0.654358
Average loss: 0.967729, Train accuracy: 0.669181, Val accuracy: 0.657088
Average loss: 0.942735, Train accuracy: 0.677490, Val accuracy: 0.693332
Average loss: 0.927515, Train accuracy: 0.684896, Val accuracy: 0.676473
###Markdown
Hyperparameter tuning
###Code
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
anneal_coeff = 0.2
anneal_epochs = [1, 5, 10, 15, 20, 50]
reg = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 5
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
for lrates in learning_rates:
for rgs in reg:
print("learning rater: " , lrates, " refularization: ", rgs)
lenet_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, padding=0),
nn.MaxPool2d(kernel_size=2),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, padding=0),
nn.MaxPool2d(kernel_size=2),
nn.ReLU(inplace=True),
Flattener(),
nn.Linear(16*5*5, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10))
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=lrates, weight_decay=rgs)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor = 0.5, patience = 2, verbose = True)
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
run_record[Hyperparams(lrates, anneal_epochs[0], rgs)] = RunResult(lenet_model, train_history, val_history , val_history[-1])
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
###Output
Best validation accuracy: 0.67, best hyperparams: Hyperparams(learning_rate=0.1, anneal_epochs=1, reg=1e-05)
###Markdown
Free exercise - let's catch up with and overtake LeNet! Try to find an architecture and training settings that do better than our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch) - Changing the number of layers and their width - Varying the number of training epochs - Trying other augmentations
###Code
print("starting gym training ")
lrates = 0.002
rgs = 1e-05
epoch_num = 20
best_model = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(64),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(128),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(512),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(512),
nn.MaxPool2d(kernel_size=2),
Flattener(),
nn.Linear(512, 1024),
nn.ReLU(inplace=True),
nn.BatchNorm1d(1024),
nn.Linear(1024, 10))
best_model.type(torch.cuda.FloatTensor)
best_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
#optimizer = optim.SGD(best_model.parameters(), lr=lrates, weight_decay=rgs)
#optimizer = optim.Adam(best_model.parameters(), lr=lrates, weight_decay=rgs)
optimizer = optim.Adagrad(best_model.parameters(), lr=lrates, weight_decay=rgs)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor = 0.5, patience = 2, verbose = False)
#scheduler = optim.lr_scheduler.StepLR(optimizer, 2, gamma=0.5)
loss_history, train_history, val_history = train_model(best_model, train_aug_loader, val_loader, loss, optimizer, epoch_num, scheduler)
###Output
starting gym training
Average loss: 2.267137, Train accuracy: 0.174658, Val accuracy: 0.188520
Average loss: 2.243175, Train accuracy: 0.187370, Val accuracy: 0.189816
Average loss: 1.743498, Train accuracy: 0.376088, Val accuracy: 0.752986
Average loss: 0.550480, Train accuracy: 0.828209, Val accuracy: 0.878780
Average loss: 0.370281, Train accuracy: 0.889090, Val accuracy: 0.879872
Average loss: 0.301625, Train accuracy: 0.910658, Val accuracy: 0.903147
###Markdown
The final chord - let's check the best model on the test set. For a change, you write the code that runs the model on the test set. In the end you should have trained a model that achieves more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
###Code
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
print("cla$$ =)")
###Output
cla$$ =) |
notebooks/DataProfiler.ipynb | ###Markdown
Data profiling is critical for the success of data loads. Profiling across files is even more important: in many scenarios I have observed that attribute behavior changes between the multiple files sent. There can be multiple reasons for that, e.g.: 1. A change in the upstream environment (DEV/UAT/PROD), i.e. the first file was generated from the DEV environment and a later one from a different environment, hence data differences may occur. 2. A change in business logic.
###Code
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").appName('PySpark_Tutorial').getOrCreate()
spark
import platform, sys, os
print('Platform = ',platform.platform())
print('Version of Spark = ',spark.version)
print('Python version = ',sys.version)
import pandas as pd
from pyspark.sql import functions as F
from pyspark.sql.functions import isnan, when, count, col
def dataprofile(data_all_df,data_cols):
data_df = data_all_df.select(data_cols)
columns2Bprofiled = data_df.columns
global schema_name, table_name
if not 'schema_name' in globals():
schema_name = 'schema_name'
if not 'table_name' in globals():
table_name = 'table_name'
dprof_df = pd.DataFrame({'schema_name':[schema_name] * len(data_df.columns),\
'table_name':[table_name] * len(data_df.columns),\
'column_names':data_df.columns,\
'data_types':[x[1] for x in data_df.dtypes]})
dprof_df = dprof_df[['schema_name','table_name','column_names', 'data_types']]
dprof_df.set_index('column_names', inplace=True, drop=False)
# ======================
num_rows = data_df.count()
dprof_df['num_rows'] = num_rows
# ======================
# number of rows with nulls and nans
df_nacounts = data_df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in data_df.columns \
if data_df.select(c).dtypes[0][1]!='timestamp']).toPandas().transpose()
df_nacounts = df_nacounts.reset_index()
df_nacounts.columns = ['column_names','num_null']
dprof_df = pd.merge(dprof_df, df_nacounts, on = ['column_names'], how = 'left')
# ========================
# number of rows with white spaces (one or more space) or blanks
num_spaces = [data_df.where(F.col(c).rlike('^\\s+$')).count() for c in data_df.columns]
dprof_df['num_spaces'] = num_spaces
num_blank = [data_df.where(F.col(c)=='').count() for c in data_df.columns]
dprof_df['num_blank'] = num_blank
# =========================
# using the in built describe() function
desc_df = data_df.describe().toPandas().transpose()
desc_df.columns = ['count', 'mean', 'stddev', 'min', 'max']
desc_df = desc_df.iloc[1:,:]
desc_df = desc_df.reset_index()
desc_df.columns.values[0] = 'column_names'
desc_df = desc_df[['column_names','count', 'mean', 'stddev']]
dprof_df = pd.merge(dprof_df, desc_df , on = ['column_names'], how = 'left')
# ===========================================
allminvalues = [data_df.select(F.min(x)).limit(1).toPandas().iloc[0][0] for x in columns2Bprofiled]
allmaxvalues = [data_df.select(F.max(x)).limit(1).toPandas().iloc[0][0] for x in columns2Bprofiled]
allmincounts = [data_df.where(col(x) == y).count() for x,y in zip(columns2Bprofiled, allminvalues)]
allmaxcounts = [data_df.where(col(x) == y).count() for x,y in zip(columns2Bprofiled, allmaxvalues)]
df_counts = dprof_df[['column_names']]
df_counts.insert(loc=0, column='min', value=allminvalues)
df_counts.insert(loc=0, column='counts_min', value=allmincounts)
df_counts.insert(loc=0, column='max', value=allmaxvalues)
df_counts.insert(loc=0, column='counts_max', value=allmaxcounts)
df_counts = df_counts[['column_names','min','counts_min','max','counts_max']]
dprof_df = pd.merge(dprof_df, df_counts , on = ['column_names'], how = 'left')
# ==========================================
# number of distinct values in each column
dprof_df['num_distinct'] = [data_df.select(x).distinct().count() for x in columns2Bprofiled]
# ============================================
    # most frequently occurring value in a column and its count
dprof_df['most_freq_valwcount'] = [data_df.groupBy(x).count().sort("count",ascending=False).limit(1).\
toPandas().iloc[0].values.tolist() for x in columns2Bprofiled]
dprof_df['most_freq_value'] = [x[0] for x in dprof_df['most_freq_valwcount']]
dprof_df['most_freq_value_count'] = [x[1] for x in dprof_df['most_freq_valwcount']]
dprof_df = dprof_df.drop(['most_freq_valwcount'],axis=1)
    # least frequently occurring value in a column and its count
dprof_df['least_freq_valwcount'] = [data_df.groupBy(x).count().sort("count",ascending=True).limit(1).\
toPandas().iloc[0].values.tolist() for x in columns2Bprofiled]
dprof_df['least_freq_value'] = [x[0] for x in dprof_df['least_freq_valwcount']]
dprof_df['least_freq_value_count'] = [x[1] for x in dprof_df['least_freq_valwcount']]
dprof_df = dprof_df.drop(['least_freq_valwcount'],axis=1)
return dprof_df
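# Illustrative sketch (not part of the original function): the introduction above
# argues that profiling *across* files matters. Assuming the profile of an earlier
# file was kept, two profiles can be diffed side by side like this; the helper
# name and the flag column below are hypothetical.
def compare_profiles(dprof_old, dprof_new, key_cols=['column_names']):
    # Suffix the metrics so both versions survive the merge
    merged = pd.merge(dprof_old, dprof_new, on=key_cols, how='outer',
                      suffixes=('_old', '_new'))
    # Flag columns whose null counts changed between the two files
    merged['null_count_changed'] = merged['num_null_old'] != merged['num_null_new']
    return merged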
import pandas as pd
# spark.read.text() reads each line as a single 'value' column and accepts no
# delimiter option, so the CSV reader with explicit options is used instead
df = spark.read.options(delimiter=",", header=True).csv(r'file:/D:/Projects/saket1471/learn-spark/data/netflix_titles.csv')
df.columns
df.show()
# Driver code for the data profle function
import time
start = time.time()
cols2profile = df.columns # select all or some columns from the table
dprofile = dataprofile(df, cols2profile)
end = time.time()
print('Time taken to execute dataprofile function ', (end - start)/60,' minutes')
###Output
_____no_output_____ |
impedancia/medio_heterogeneo.ipynb | ###Markdown
Microscopic Ohm's law in a heterogeneous medium. By David A. Miranda, PhD. 2020. 1. Import the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
2. Microscopic Ohm's law. The linear relation between the electric field $\vec{E}$ and the current density $\vec{J}$ is known as the microscopic Ohm's law. For a heterogeneous medium, where the electric field and the current density need not be parallel, it is given by$$\vec{J} = \hat{\sigma} \vec{E}$$where $\hat{\sigma}$ is the conductivity tensor, which is symmetric and of second order.
###Code
σ_xx = 1e3 # S/m
σ_yy = 1e1 # S/m
σ_zz = 5e3 # S/m
σ_xy = 1e-1 # S/m
σ_xz = 1e0 # S/m
σ_yz = 1e-2 # S/m
E = np.r_[10, 0, 5] # V/m
sigma = np.array([
[σ_xx, σ_xy, σ_xz],
[σ_xy, σ_yy, σ_yz],
[σ_xz, σ_yz, σ_zz],
])
J = np.dot(sigma, E)
print('E : ', E)
print('J : ', J)
###Output
E : [10 0 5]
J : [1.0005e+04 1.0500e+00 2.5010e+04]
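###Markdown
A quick numerical check (an illustrative sketch, not part of the original notebook): since $\hat{\sigma}$ is symmetric it can be diagonalized into principal conductivities, and because it is anisotropic, $\vec{J}$ is in general not parallel to $\vec{E}$.
###Code
# Principal conductivities: eigenvalues of the symmetric tensor sigma
principal_conductivities, principal_axes = np.linalg.eigh(sigma)
# Angle between E and J; nonzero here because sigma is anisotropic
cos_theta = np.dot(E, J) / (np.linalg.norm(E) * np.linalg.norm(J))
theta_deg = np.degrees(np.arccos(cos_theta))
###Output
_____no_output_____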
###Markdown
3. Plotting the vectors. See details on [GitHub](https://gist.github.com/WetHat/1d6cd0f7309535311a539b42cccca89c)
###Code
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d.axes3d import Axes3D
from mpl_toolkits.mplot3d.proj3d import proj_transform
class Arrow3D(FancyArrowPatch):
def __init__(self, r1, r2, *args, **kwargs):
x, y, z = r1
x2, y2, z2 = r2
dx = x2 - x; dy = y2 - y; dz = z2 - z
super().__init__((0,0), (0,0), *args, **kwargs)
self._xyz = (x,y,z)
self._dxdydz = (dx,dy,dz)
def draw(self, renderer):
x1,y1,z1 = self._xyz
dx,dy,dz = self._dxdydz
x2,y2,z2 = (x1+dx,y1+dy,z1+dz)
xs, ys, zs = proj_transform((x1,x2),(y1,y2),(z1,z2), renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
super().draw(renderer)
def _arrow3D(ax, r1, r2, *args, **kwargs):
    '''Add a 3D arrow to an `Axes3D` instance.'''
arrow = Arrow3D(r1, r2, *args, **kwargs)
ax.add_artist(arrow)
setattr(Axes3D,'arrow3D',_arrow3D)
fig = plt.figure(dpi=120)
ax = fig.add_subplot(111, projection='3d')
ax.set_xlim(0,2)
ax.arrow3D([0,0,0],
E/max(np.abs(E)),
mutation_scale=20,
arrowstyle="-|>",
linestyle='dashed')
ax.arrow3D([0,0,0],
J/max(np.abs(J)),
mutation_scale=20,
ec ='green',
fc='red')
ax.set_xlabel('x'); ax.set_xticks([])
ax.set_ylabel('y'); ax.set_yticks([])
ax.set_zlabel('z'); ax.set_zticks([])
fig.tight_layout()
_ = plt.title(r'$\vec{E}$ (black) and $\vec{J}$ (red).')
###Output
_____no_output_____ |
site/en/r2/tutorials/keras/basic_text_classification.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Text classification with movie reviews View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow==2.0.0-beta0
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetThe IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable. Explore the dataLet's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
###Code
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
The text of reviews have been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Movie reviews may be different lengths. The below code shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Convert the integers back to wordsIt may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer to string mapping:
###Code
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Now we can use the `decode_review` function to display the text for the first review:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Prepare the dataThe reviews—the arrays of integers—must be converted to tensors before fed into the neural network. This conversion can be done a couple of ways:* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.In this tutorial, we will use the second approach.Since the movie reviews must be the same length, we will use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
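###Markdown
For contrast, here is a minimal sketch of the first (multi-hot) approach described above; the rest of this tutorial sticks with the padding approach:
###Code
# Multi-hot encode: each review becomes a 10,000-dimensional vector of 0s and 1s
def multi_hot_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set the indices of present words to 1
    return results
###Output
_____no_output_____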
###Markdown
Let's look at the length of the examples now:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
And inspect the (now padded) first review:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Build the modelThe neural network is created by stacking layers—this requires two main architectural decisions:* How many layers to use in the model?* How many *hidden units* to use for each layer?In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
###Code
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
###Output
_____no_output_____
###Markdown
The layers are stacked sequentially to build the classifier:1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level. Hidden unitsThe above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later. Loss function and optimizerA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.This isn't the only choice for a loss function, you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.Now, configure the model to use an optimizer and a loss function:
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
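###Markdown
To build intuition for what `binary_crossentropy` measures, here is a hand computation (a small added sketch): for a true label y and predicted probability p, the per-example loss is -(y*log(p) + (1-y)*log(1-p)), so a confident wrong answer is penalized far more heavily than a confident right one:
###Code
import numpy as np
def binary_crossentropy_example(y_true, p):
    # Per-example binary cross-entropy for label y_true and probability p.
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
print(binary_crossentropy_example(1, 0.9))  # confident and correct -> ~0.105
print(binary_crossentropy_example(1, 0.1))  # confident and wrong   -> ~2.303
###Output
_____no_output_____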
###Markdown
Create a validation setWhen training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate its accuracy.)
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
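###Markdown
A quick check (a small added sketch) that the split left 10,000 validation examples and 15,000 training examples:
###Code
# The original 25,000 training reviews are split 10,000 / 15,000.
print(len(x_val), len(partial_x_train))  # expected: 10000 15000
###Output
_____no_output_____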
###Markdown
Train the modelTrain the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate the modelAnd let's see how the model performs. Two values will be returned: the loss (a number which represents our error; lower values are better) and the accuracy.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%. Create a graph of accuracy and loss over time`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
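###Markdown
In these plots the validation loss typically bottoms out after a handful of epochs and then rises while the training loss keeps falling: the overfitting mentioned earlier. One common remedy (a minimal sketch using the standard `keras.callbacks.EarlyStopping` API, not part of the original tutorial) is to stop training once the validation loss stops improving:
###Code
# Stop when val_loss has not improved for 2 consecutive epochs, and
# roll back to the best weights seen during training.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss',
                                           patience=2,
                                           restore_best_weights=True)
# history = model.fit(partial_x_train, partial_y_train,
#                     epochs=40, batch_size=512,
#                     validation_data=(x_val, y_val),
#                     callbacks=[early_stop], verbose=1)
###Output
_____no_output_____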
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Text classification with movie reviews
This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.
We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.
This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable.
Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
###Code
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Convert the integers back to words
It may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer-to-string mapping:
###Code
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first few indices are reserved for special tokens, so shift every word index up by 3
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Now we can use the `decode_review` function to display the text for the first review:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Prepare the data
The reviews—the arrays of integers—must be converted to tensors before being fed into the neural network. This conversion can be done a couple of ways:
* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix. (A hedged sketch of this approach appears after the padding code below.)
* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.
In this tutorial, we will use the second approach. Since the movie reviews must be the same length, we will use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
###Markdown
Let's look at the length of the examples now:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
And inspect the (now padded) first review:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Build the modelThe neural network is created by stacking layers—this requires two main architectural decisions:* How many layers to use in the model?* How many *hidden units* to use for each layer?In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
###Code
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
###Output
_____no_output_____
###Markdown
The layers are stacked sequentially to build the classifier:1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level. Hidden unitsThe above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later. Loss function and optimizerA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.This isn't the only choice for a loss function, you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.Now, configure the model to use an optimizer and a loss function:
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Create a validation setWhen training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy).
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Train the modelTrain the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate the modelAnd let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%. Create a graph of accuracy and loss over time`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Text classification with movie reviews This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tf-nightly-2.0-preview
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetThe IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable. Explore the dataLet's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
###Code
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Convert the integers back to wordsIt may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer to string mapping:
###Code
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Now we can use the `decode_review` function to display the text for the first review:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Prepare the dataThe reviews—the arrays of integers—must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways:* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix (a minimal sketch of this approach appears after the padding code below).* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.In this tutorial, we will use the second approach.Since the movie reviews must be the same length, we will use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
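###Markdown
For comparison, here is a minimal sketch of the first (multi-hot) approach described above. This is an illustrative helper, not part of the tutorial's pipeline, and it expects the un-padded integer sequences as input:
###Code
def multi_hot_encode(sequences, dimension=10000):
    # One row per review; set the column of every word that occurs to 1.0
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, list(sequence)] = 1.0
    return results

# e.g. the sequence [3, 5] becomes all zeros except at indices 3 and 5
print(multi_hot_encode([[3, 5]])[0, :8])
###Output
_____no_output_____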
###Markdown
Let's look at the length of the examples now:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
And inspect the (now padded) first review:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Build the modelThe neural network is created by stacking layers—this requires two main architectural decisions:* How many layers to use in the model?* How many *hidden units* to use for each layer?In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
###Code
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
###Output
_____no_output_____
###Markdown
The layers are stacked sequentially to build the classifier:1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level. Hidden unitsThe above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later. Loss function and optimizerA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.This isn't the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions (a tiny worked example follows the compile step below).Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.Now, configure the model to use an optimizer and a loss function:
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
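###Markdown
To make the "distance" intuition concrete, here is the binary cross-entropy of a single prediction computed by hand (illustrative only; the values are made up):
###Code
y_true, y_pred = 1.0, 0.9  # a confident, correct prediction
bce = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
print(bce)  # ~0.105; a confident wrong prediction (y_pred=0.1) gives ~2.303
###Output
_____no_output_____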
###Markdown
Create a validation setWhen training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy).
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
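###Markdown
A quick, illustrative sanity check of the split sizes (15,000 examples remain for training; 10,000 are held out for validation):
###Code
# Shapes after the split (train_data was padded to length 256 above)
print(partial_x_train.shape, x_val.shape)  # (15000, 256) (10000, 256)
###Output
_____no_output_____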
###Markdown
Train the modelTrain the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate the modelNow let's see how the model performs. Two values are returned: the loss (a number that represents our error; lower values are better) and the accuracy.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%. Create a graph of accuracy and loss over time`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Text classification with movie reviews This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem. We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews. This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
!pip install tf-nightly-2.0-preview
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetThe IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable. Explore the data Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
###Code
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Convert the integers back to wordsIt may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer to string mapping:
###Code
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Now we can use the `decode_review` function to display the text for the first review:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
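###Markdown
Going the other way is just as easy. Here is a hypothetical `encode_review` helper (not part of the tutorial) that maps raw text onto the same integer scheme, using the reserved indices defined above:
###Code
def encode_review(text):
    # 1 is <START>; words missing from the vocabulary map to 2 (<UNK>)
    return [1] + [word_index.get(word, 2) for word in text.lower().split()]

print(encode_review("this movie was great"))
###Output
_____no_output_____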
###Markdown
Prepare the dataThe reviews—the arrays of integers—must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways:* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.In this tutorial, we will use the second approach. Since the movie reviews must be the same length, we will use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
###Markdown
Let's look at the length of the examples now:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
And inspect the (now padded) first review:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Build the modelThe neural network is created by stacking layers—this requires two main architectural decisions:* How many layers to use in the model?* How many *hidden units* to use for each layer?In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
###Code
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
###Output
_____no_output_____
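###Markdown
Before walking through the layers, here is a tiny, illustrative shape check of the embedding and pooling layers in isolation (it assumes the eager execution enabled by the TF 2.0 preview installed above):
###Code
emb = keras.layers.Embedding(vocab_size, 16)
pool = keras.layers.GlobalAveragePooling1D()
x = np.array([[1, 2, 3, 4]])   # a batch of one sequence of length 4
print(emb(x).shape)            # (1, 4, 16): (batch, sequence, embedding)
print(pool(emb(x)).shape)      # (1, 16): averaged over the sequence axis
###Output
_____no_output_____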
###Markdown
The layers are stacked sequentially to build the classifier:1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level. Hidden unitsThe above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later. Loss function and optimizerA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function. This isn't the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.Now, configure the model to use an optimizer and a loss function:
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Create a validation setWhen training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy).
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Train the modelTrain the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate the modelNow let's see how the model performs. Two values are returned: the loss (a number that represents our error; lower values are better) and the accuracy.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%. Create a graph of accuracy and loss over time`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Text classification with movie reviews This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow==2.0.0-beta1
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetThe IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable. Explore the dataLet's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
###Code
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
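###Markdown
As an illustrative sanity check, every word index in the data should fall below the `num_words=10000` cap:
###Code
# The largest index across all training reviews (expected to be below 10000)
print(max(max(sequence) for sequence in train_data))
###Output
_____no_output_____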
###Markdown
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Convert the integers back to wordsIt may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer to string mapping:
###Code
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Now we can use the `decode_review` function to display the text for the first review:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Prepare the dataThe reviews—the arrays of integers—must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways:* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.In this tutorial, we will use the second approach.Since the movie reviews must be the same length, we will use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
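###Markdown
For intuition, here is what the padding does to one short review, written out by hand (illustrative; `pad_sequences` also truncates reviews longer than `maxlen`, from the front by default):
###Code
sequence = [1, 14, 22]  # a hypothetical three-word review
padded = sequence + [word_index["<PAD>"]] * (256 - len(sequence))
print(len(padded), padded[:6])  # 256 [1, 14, 22, 0, 0, 0]
###Output
_____no_output_____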
###Markdown
Let's look at the length of the examples now:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
And inspect the (now padded) first review:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Build the modelThe neural network is created by stacking layers—this requires two main architectural decisions:* How many layers to use in the model?* How many *hidden units* to use for each layer?In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
###Code
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
###Output
_____no_output_____
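###Markdown
The model ends in a single `sigmoid` unit, which squashes any real-valued output into (0, 1). A quick standalone illustration of the function, before walking through the layers below:
###Code
z = np.array([-2.0, 0.0, 2.0])   # example logits
print(1.0 / (1.0 + np.exp(-z)))  # [~0.119, 0.5, ~0.881], all within (0, 1)
###Output
_____no_output_____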
###Markdown
The layers are stacked sequentially to build the classifier:1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level. Hidden unitsThe above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later. Loss function and optimizerA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.This isn't the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.Now, configure the model to use an optimizer and a loss function:
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Create a validation setWhen training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy).
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Train the modelTrain the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate the modelNow let's see how the model performs. Two values are returned: the loss (a number that represents our error; lower values are better) and the accuracy.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
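###Markdown
Since `results` is a list of the form `[loss, accuracy]`, it can be unpacked directly (illustrative):
###Code
loss_value, accuracy_value = results
print("loss: {:.3f}, accuracy: {:.3f}".format(loss_value, accuracy_value))
###Output
_____no_output_____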
###Markdown
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%. Create a graph of accuracy and loss over time`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Text classification with movie reviews View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow==2.0.0-beta1
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetThe IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable. Explore the dataLet's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
###Code
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
The text of reviews have been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Movie reviews may be different lengths. The below code shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Convert the integers back to wordsIt may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer to string mapping:
###Code
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Now we can use the `decode_review` function to display the text for the first review:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Prepare the dataThe reviews—the arrays of integers—must be converted to tensors before fed into the neural network. This conversion can be done a couple of ways:* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.In this tutorial, we will use the second approach.Since the movie reviews must be the same length, we will use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
###Markdown
Let's look at the length of the examples now:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
And inspect the (now padded) first review:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Build the modelThe neural network is created by stacking layers—this requires two main architectural decisions:* How many layers to use in the model?* How many *hidden units* to use for each layer?In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
###Code
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
###Output
_____no_output_____
###Markdown
The layers are stacked sequentially to build the classifier:1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level. Hidden unitsThe above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later. Loss function and optimizerA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.This isn't the only choice for a loss function, you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.Now, configure the model to use an optimizer and a loss function:
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Create a validation setWhen training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy).
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Train the modelTrain the model for 40 epochs in mini-batches of 512 samples. This is 40 passes over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate the modelLet's see how the model performs. Two values will be returned: the loss (a number representing our error; lower values are better) and the accuracy.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
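###Markdown
Since `evaluate` returns the loss first, followed by each compiled metric, the two values can be unpacked directly (a small illustrative cell added for clarity):
###Code
# Unpack the [loss, accuracy] list returned by model.evaluate above.
test_loss, test_acc = results
print("loss:", test_loss, "accuracy:", test_acc)
###Output
_____no_output_____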
###Markdown
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%. Create a graph of accuracy and loss over time`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Text classification with movie reviews This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem. We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews. This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetThe IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable. Explore the data Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
###Code
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Convert the integers back to wordsIt may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer to string mapping:
###Code
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Now we can use the `decode_review` function to display the text for the first review:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Prepare the dataThe reviews—the arrays of integers—must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways:* Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.In this tutorial, we will use the second approach. Since the inputs to the network must be the same length, we will use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
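###Markdown
For comparison, here is a minimal sketch of the first (multi-hot) approach described above. It is illustrative only—this tutorial proceeds with the padding approach—and the helper name `multi_hot_encode` is hypothetical:
###Code
import numpy as np

def multi_hot_encode(sequences, dimension=10000):
    # Each sequence becomes a `dimension`-long vector of 0s and 1s marking
    # which word indices occur in it (memory-intensive for large vocabularies).
    results = np.zeros((len(sequences), dimension))
    for i, seq in enumerate(sequences):
        results[i, seq] = 1.0
    return results

# The sequence [3, 5] becomes a vector of zeros with ones at indices 3 and 5.
print(multi_hot_encode([[3, 5]], dimension=10))
###Output
_____no_output_____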
###Markdown
Let's look at the length of the examples now:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
And inspect the (now padded) first review:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Build the modelThe neural network is created by stacking layers—this requires two main architectural decisions:* How many layers to use in the model?* How many *hidden units* to use for each layer?In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
###Code
# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
###Output
_____no_output_____
###Markdown
The layers are stacked sequentially to build the classifier:1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level. Hidden unitsThe above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation. If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later. Loss function and optimizerA model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function. This isn't the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions. Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error. Now, configure the model to use an optimizer and a loss function:
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
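###Markdown
To make the "distance" intuition concrete, here is an illustrative hand computation of binary cross-entropy on made-up predictions (added for clarity, not part of the original tutorial):
###Code
import numpy as np

# loss = -mean(y * log(p) + (1 - y) * log(1 - p)), the quantity that
# `binary_crossentropy` measures per example.
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.6])  # hypothetical model probabilities
bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(bce)  # confident, correct predictions keep this value small
###Output
_____no_output_____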
###Markdown
Create a validation setWhen training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy).
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Train the modelTrain the model for 40 epochs in mini-batches of 512 samples. This is 40 passes over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
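###Markdown
A quick bit of arithmetic on the setup above (illustrative): 25,000 training reviews minus the 10,000 validation examples leaves 15,000, so each epoch performs ceil(15000 / 512) = 30 gradient updates.
###Code
import math

# Gradient updates per epoch for the fit() call above.
print(math.ceil((25000 - 10000) / 512))  # 30
###Output
_____no_output_____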
###Markdown
Evaluate the modelLet's see how the model performs. Two values will be returned: the loss (a number representing our error; lower values are better) and the accuracy.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%. Create a graph of accuracy and loss over time`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
###Code
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____ |
course_1_neural_networks_and_deep_learning/week_4/Deep_Neural_Network_Application_v8.ipynb | ###Markdown
Deep Neural Network for Image Classification: ApplicationWhen you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course! You will use the functions you implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation. **After this assignment you will be able to:**- Build and apply a deep neural network to supervised learning. Let's get started! 1 - Packages Let's first import all the packages that you will need during this assignment. - [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
###Code
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v3 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - DatasetYou will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you built then had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform better!**Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labelled as cat (1) or non-cat (0) - a test set of m_test images labelled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).Let's get more familiar with the dataset. Load the data by running the cell below.
###Code
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
###Output
_____no_output_____
###Markdown
The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
###Code
# Example of a picture
index = 11
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
###Output
Number of training examples: 209
Number of testing examples: 50
Each image is of size: (64, 64, 3)
train_x_orig shape: (209, 64, 64, 3)
train_y shape: (1, 209)
test_x_orig shape: (50, 64, 64, 3)
test_y shape: (1, 50)
###Markdown
As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below. Figure 1: Image to vector conversion.
###Code
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
###Output
train_x's shape: (12288, 209)
test_x's shape: (12288, 50)
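###Markdown
A quick, illustrative check (added for clarity) that the flattening preserved the pixel values: column 0 of `train_x_flatten` should be exactly the first image, flattened.
###Code
print(num_px * num_px * 3)  # 12288, the flattened feature dimension
roundtrip = train_x_flatten[:, 0].reshape(num_px, num_px, 3)
print(np.allclose(roundtrip, train_x_orig[0]))  # True
###Output
_____no_output_____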
###Markdown
$12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector. 3 - Architecture of your model Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.You will build two different models:- A 2-layer neural network- An L-layer deep neural networkYou will then compare the performance of these models, and also try out different values for $L$. Let's look at the two architectures. 3.1 - 2-layer neural network Figure 2: 2-layer neural network. The model can be summarized as: ***INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT***. Detailed Architecture of figure 2:- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$. - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.- You then repeat the same process.- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias). - Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat. 3.2 - L-layer deep neural networkIt is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation: Figure 3: L-layer neural network. The model can be summarized as: ***[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID***Detailed Architecture of figure 3:- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat. 3.3 - General methodologyAs usual, you will follow the Deep Learning methodology to build the model: 1. Initialize parameters / Define hyperparameters 2. Loop for num_iterations: a. Forward propagation b. Compute cost function c. Backward propagation d. Update parameters (using parameters, and grads from backprop) 3. Use trained parameters to predict labelsLet's now implement those two models! 4 - Two-layer neural network**Question**: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: *LINEAR -> RELU -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:```pythondef initialize_parameters(n_x, n_h, n_y): ... return parameters def linear_activation_forward(A_prev, W, b, activation): ... return A, cachedef compute_cost(AL, Y): ... return costdef linear_activation_backward(dA, cache, activation): ... return dA_prev, dW, dbdef update_parameters(parameters, grads, learning_rate): ... return parameters```
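To ground the notation above before implementing the full model, here is a tiny, illustrative numpy sketch of one LINEAR -> RELU step with toy sizes (the real model uses the helper functions listed above):
###Code
# Toy illustration of A1 = relu(W1 @ X + b1) with 3 input features,
# 2 hidden units, and 4 examples; shapes follow the (n_h, n_x) convention.
rng = np.random.RandomState(1)
X_demo = rng.randn(3, 4)
W1_demo = rng.randn(2, 3) * 0.01
b1_demo = np.zeros((2, 1))
A1_demo = np.maximum(0, np.dot(W1_demo, X_demo) + b1_demo)
print(A1_demo.shape)  # (2, 4): (n_h, number of examples)
###Output
_____no_output_____
###Markdown
Now set the constants and implement the two-layer model using those helpers: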
###Code
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
"""
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
"""
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
### START CODE HERE ### (≈ 2 lines of code)
A1, cache1 = linear_activation_forward(X, W1, b1, activation="relu")
A2, cache2 = linear_activation_forward(A1, W2, b2, activation="sigmoid")
### END CODE HERE ###
# Compute cost
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(A2, Y)
### END CODE HERE ###
        # Initializing backward propagation: dA2 is the derivative of the cross-entropy cost with respect to A2
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
### START CODE HERE ### (≈ 2 lines of code)
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation="sigmoid")
dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation="relu")
### END CODE HERE ###
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
### START CODE HERE ### (approx. 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
        # Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
###Code
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
###Output
Cost after iteration 0: 0.6930497356599888
Cost after iteration 100: 0.6464320953428849
Cost after iteration 200: 0.6325140647912677
Cost after iteration 300: 0.6015024920354665
Cost after iteration 400: 0.5601966311605747
Cost after iteration 500: 0.5158304772764729
Cost after iteration 600: 0.47549013139433255
Cost after iteration 700: 0.43391631512257495
Cost after iteration 800: 0.400797753620389
Cost after iteration 900: 0.3580705011323798
Cost after iteration 1000: 0.3394281538366411
Cost after iteration 1100: 0.3052753636196264
Cost after iteration 1200: 0.2749137728213018
Cost after iteration 1300: 0.24681768210614854
Cost after iteration 1400: 0.19850735037466094
Cost after iteration 1500: 0.17448318112556666
Cost after iteration 1600: 0.17080762978096128
Cost after iteration 1700: 0.11306524562164724
Cost after iteration 1800: 0.09629426845937152
Cost after iteration 1900: 0.08342617959726856
Cost after iteration 2000: 0.07439078704319078
Cost after iteration 2100: 0.06630748132267927
Cost after iteration 2200: 0.05919329501038164
Cost after iteration 2300: 0.05336140348560553
Cost after iteration 2400: 0.048554785628770115
###Markdown
**Expected Output**:

| | |
|---|---|
| **Cost after iteration 0** | 0.6930497356599888 |
| **Cost after iteration 100** | 0.6464320953428849 |
| **...** | ... |
| **Cost after iteration 2400** | 0.048554785628770206 |

Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this. Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
###Code
predictions_train = predict(train_x, train_y, parameters)
###Output
Accuracy: 1.0
###Markdown
**Expected Output**: **Accuracy** 1.0
###Code
predictions_test = predict(test_x, test_y, parameters)
###Output
Accuracy: 0.72
###Markdown
**Expected Output**: **Accuracy** 0.72 **Note**: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting. Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model. 5 - L-layer Neural Network**Question**: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: *[LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:```pythondef initialize_parameters_deep(layers_dims): ... return parameters def L_model_forward(X, parameters): ... return AL, cachesdef compute_cost(AL, Y): ... return costdef L_model_backward(AL, Y, caches): ... return gradsdef update_parameters(parameters, grads, learning_rate): ... return parameters```
###Code
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 4-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
"""
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
    X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization. (≈ 1 line of code)
### START CODE HERE ###
parameters = initialize_parameters_deep(layers_dims)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
### START CODE HERE ### (≈ 1 line of code)
AL, caches = L_model_forward(X, parameters)
### END CODE HERE ###
# Compute cost.
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL, Y)
### END CODE HERE ###
# Backward propagation.
### START CODE HERE ### (≈ 1 line of code)
grads = L_model_backward(AL, Y, caches)
### END CODE HERE ###
# Update parameters.
### START CODE HERE ### (≈ 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
        # Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
You will now train the model as a 4-layer neural network. Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
###Code
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
###Output
Cost after iteration 0: 0.771749
Cost after iteration 100: 0.672053
Cost after iteration 200: 0.648263
Cost after iteration 300: 0.611507
Cost after iteration 400: 0.567047
Cost after iteration 500: 0.540138
Cost after iteration 600: 0.527930
Cost after iteration 700: 0.465477
Cost after iteration 800: 0.369126
Cost after iteration 900: 0.391747
Cost after iteration 1000: 0.315187
Cost after iteration 1100: 0.272700
Cost after iteration 1200: 0.237419
Cost after iteration 1300: 0.199601
Cost after iteration 1400: 0.189263
Cost after iteration 1500: 0.161189
Cost after iteration 1600: 0.148214
Cost after iteration 1700: 0.137775
Cost after iteration 1800: 0.129740
Cost after iteration 1900: 0.121225
Cost after iteration 2000: 0.113821
Cost after iteration 2100: 0.107839
Cost after iteration 2200: 0.102855
Cost after iteration 2300: 0.100897
Cost after iteration 2400: 0.092878
###Markdown
**Expected Output**:

| | |
|---|---|
| **Cost after iteration 0** | 0.771749 |
| **Cost after iteration 100** | 0.672053 |
| **...** | ... |
| **Cost after iteration 2400** | 0.092878 |
###Code
pred_train = predict(train_x, train_y, parameters)
###Output
Accuracy: 0.985645933014
###Markdown
**Train Accuracy** 0.985645933014
###Code
pred_test = predict(test_x, test_y, parameters)
###Output
Accuracy: 0.8
###Markdown
**Expected Output**: **Test Accuracy** 0.8 Congrats! It seems that your 4-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set. This is good performance for this task. Nice job! Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course). 6) Results AnalysisFirst, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
###Code
print_mislabeled_images(classes, test_x, test_y, pred_test)
###Output
_____no_output_____
###Markdown
**A few types of images the model tends to do poorly on include:** - Cat body in an unusual position- Cat appears against a background of a similar color- Unusual cat color and species- Camera Angle- Brightness of the picture- Scale variation (cat is very large or small in image) 7) Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
###Code
## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))  # note: scipy.ndimage.imread is deprecated and removed in newer SciPy releases
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))  # scipy.misc.imresize is likewise deprecated; newer environments can use PIL's Image.resize instead
my_image = my_image/255.
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
###Output
_____no_output_____ |